Compiling Giac in Windows (VS2008)

Finally, the day has come when I can proudly write that I have successfully compiled a computer algebra system in Windows using Visual Studio, built it as a dynamic library, and linked it with another C++ program that uses it. On other operating systems with the gcc compiler this may be a trivial task; in Windows with MSVC it was, for me, far from trivial.

Here is a walkthrough. It helps if you already understand the steps for compiling giac with gcc (either with MinGW or in a Cygwin environment) in Windows; you can look at my previous blog post to get an idea. If you just want to compile without working through the technicalities (the steps below), you may download a project archive here.

First, I explain how to compile the project (linking comes later).

  1. Create a project with the following C++ source files:
    alg_ext.cc
    cocoa.cc
    csturm.cc
    derive.cc
    desolve.cc
    ezgcd.cc
    first.cc
    gauss.cc
    gausspol.cc
    gen.cc
    global.cc
    help.cc
    identificateur.cc
    ifactor.cc
    index.cc
    input_lexer.cc (and input_lexer.ll)
    input_parser.cc
    intg.cc
    intgab.cc
    isom.cc
    lin.cc
    maple.cc
    mathml.cc
    misc.cc
    modfactor.cc
    modpoly.cc
    moyal.cc
    pari.cc
    permu.cc
    plot.cc
    plot3d.cc
    prog.cc
    quater.cc
    risch.cc
    rpn.cc
    series.cc
    solve.cc
    sparse.cc
    subst.cc
    sym2poly.cc
    symbolic.cc
    tex.cc
    threaded.cc
    ti89.cc
    tinymt32.cc
    unary.cc
    usual.cc
    vecteur.cc
    
  2. For the sake of completeness I have provided a sample project file with the corresponding source files. global.cc had to be patched because we use MPIR.

  3. Visual Studio may not have stdint.h, and some of the code uses it. I have included a stdint.h in the project archive that I am going to share. Alternatively, it is easy to add the necessary typedefs yourself by creating the following stdint.h:

    typedef __int8 int8_t;
    typedef unsigned __int8 uint8_t;
    typedef __int32 int32_t;
    typedef unsigned __int32 uint32_t;
    typedef __int64 int64_t;
    typedef unsigned __int64 uint64_t;
    

    I also patched global.cc (also in the project archive I am sharing) because MPIR already defines R_OK, which global.cc defines again. See the next item for an explanation. The patch changes the line around line 172 where R_OK is first defined. To avoid the conflict with MPIR I changed this definition to

    #ifndef HAVE_LIBMPIR 
      int R_OK=4;
    #endif
    
  4. Make sure you have the following preprocessor definitions set in Project Properties->Configuration Properties->Preprocessor:
    WIN32
    WINDOWS
    HAVE_CONFIG_H
    IN_GIAC
    GIAC_VECTOR
    _USE_MATH_DEFINES
    VISUALC (not __VISUALC__)
    _CRT_SECURE_NO_WARNINGS
    _SCL_SECURE_NO_WARNINGS
    _CRT_NON_CONFORMING_SWPRINTFS
    STATIC_BUILTIN_LEXER_FUNCTIONS
    NO_CLOCK (clock() is not available by default in MSVC)
    HAVE_NO_CWD (getcwd is not available by default in MSVC)
    MS_SMART (otherwise the #include "../../../_windows/src/stdafx.h" in gen.cc would cause an error, since stdafx.h does not reside in this relative directory by default; without it we also get an error in ifactor.cc when calling PREFETCH)
    NO_UNARY_FUNCTION_COMPOSE
    HAVE_LIBMPIR (needed for a patch in global.cc when using MPIR, because R_OK is already defined. See the attached files)
    
  5. Make sure that in config.h of the original giac source code the following are NOT defined:
    HAVE_GETCWD (getcwd is not available by default in MSVC)
    HAVE_SYS_TIME_H
    HAVE_LIBPTHREAD (otherwise you get a lot of VLA errors in vecteur.cc)
    HAVE_PTHREAD_H (otherwise you get a lot of VLA errors in vecteur.cc)
    HAVE_READLINE_HISTORY_H (not available by default in MSVC)
    HAVE_READLINE_READLINE_H (not available by default in MSVC)
    

    Also uncomment

     #define HAVE_NO_CWD 1 // (getcwd is not available by default in MSVC)
  6. In line ca. 1634 of vecteur.cc, where the method rand_1 is defined, the code uses rand(), which is ambiguous in VS2008 (error C2660). I took the liberty of changing rand() to std::rand() (I hope this is what the author of the code intended).
  7. In line ca. 35 of rpn.cc, after #ifndef NSPIRE, there is an #include that does not exist in MSVC. Workaround:
    change #ifndef NSPIRE to #if !defined(NSPIRE) && !defined(__VISUALC__)
  8. In the original static_lexer.h, for some reason the unicode literals are not understood as variables (probably an MSVC setting that I have not figured out how to set). For the moment, the quick and dirty solution I could think of (since I don’t think I am going to use these specific giac commands) is to enclose the line with at_LINEAR? and the last two lines of static_lexer.h (where unorthodox characters are used as “at” variables) between #ifndef __VISUALC__ and #endif. This patch of static_lexer.h is also in the archive I am sharing.
  9. With some compilers, input_lexer.ll and input_lexer.cc cause problems where std::log(10) is used (the compiler cannot decide which overload of log is meant). Therefore I replaced std::log(10) with std::log(10.0).

After all these changes you should not get any serious compile errors (maybe a few I have overlooked, but I am sure they are easy to resolve). The next hurdle is linking. I made a fix, but I am not sure it is the best one. There are global unary functions that are supposed to be defined somewhere and are referenced by static_extern.h. The linker initially complained about unresolved external symbols: at_ugamma, at_regroup, at_is_inside, at_igamma, at_IP and at_FP. Maybe Bernard can help me with this, but the only way I found to link was to enclose these external global initializations (which should be defined somewhere, but where?) within #ifndef __VISUALC__ and #endif. I did this in both static_lexer.h and static_extern.h.

After doing this, I was finally able to compile and link. One thing not yet mentioned is that I used CMake to export the DLL symbols, since Visual Studio usually requires applying __declspec(dllexport) to the methods you want to export when compiling the dynamic library and __declspec(dllimport) when importing from it. Newer versions of CMake (not so new anymore) allow you to do this (with additional parameters in the Pre-Link Build Events of Visual Studio). I will not go into detail about this; it can be read here.
This allowed me to create both a giac DLL and a .lib. The end result is around 8.7 MB for the DLL and the .lib together, but since the DLL is shared I find this a significant improvement over the size you get from compiling with mingw32.

Finally, to test that it works, I linked giac into another project with only one source file (make sure you add the include directories of giac, MPIR and MPFR in C/C++ -> Additional Include Directories of the project properties):

#include <giac/config.h>
#include <giac/giac.h>

using namespace std;
using namespace giac;

gen pgcd(gen a,gen b){
  gen q,r;
  for (;b!=0;){
    r=irem(a,b,q);
    a=b;
    b=r;
  }
  return a;
}

int main(){
  cout << "Trying a giac self-implementation of gcd:\n";
  cout << "Enter 2 integers ";
  gen a,b;
  cin >> a >> b;
  cout << pgcd(a,b) << endl;
  return 0;
}

I was able to compile, link and execute this program and got a result similar to the one in my last blog post (compiling with mingw32).

As promised, here is a 7zip file containing the project and the giac source I compiled with (and the setup needed in VS2008). Note: I only set up the Release configuration, and you still need to compile/link MPIR (you can get the source here) and set it up in the project for this to work.

Compiling Giac in Windows (Cygwin)

I have tried many times to compile giac for Windows. Giac is a computer algebra system that has the advantage of being very flexibly adaptable to many operating systems (if I remember correctly these include Linux, Macintosh, WinCE, Android and Windows). I believe the developers (mainly Bernard Parisse) mostly develop for Linux, but they made sure it can be compiled on many other operating systems; they even provide makefiles adapted for different ones. As with much other software that claims universal compilability, one still needs to work to get it compiled on an operating system foreign to the developers.

In the past my compilation failed because of some dependencies that I did not understand or because of some flags that I did not set. So after many years, while I was idle at some math conference, I decided to write to the developers and ask for their help. Initially I wanted to join their forum, which would allow easier communication between developers and users. It took a while for me to get in. For some reason, many computer algebra systems seem to have discussion forums that are dormant or very difficult to register for; I remember the same problem when I wanted to register for the Singular forum. Successfully joining the forum somehow exhausted me, so I did not pursue the question of compiling giac in Windows. This was not a wise decision: a year later I still saw the need to compile giac. Switching to Linux was a last resort for me. My developments are so deeply rooted in Windows that porting everything to Linux would have been much more work than finding a way to compile giac.

After posting the question in their forum, I was surprised at how helpful Bernard was in guiding me. I had three steps in mind to get the giac library working in Windows:

  1. successfully compile in Cygwin
  2. successfully compile in Mingw32
  3. successfully compile a library for Visual Studio (2008)

I am glad to announce that I am satisfied with the first two: compiling for Cygwin and compiling with mingw32. My next and final goal is to try it in Visual Studio (which belongs in a separate blog post). I promised Bernard I would document this in a blog because there seems to be a lack of information for new developers using giac (the forum and some old sites write about it, but giac deserves much, much more).

To start the walkthrough, let me first write that many things (unknown to many people) use giac. Among many other software and hardware products, the following depend on giac: the HP Prime calculator, the TI-Nspire calculator and Geogebra. I really believe this boils down to its flexibility in playing well with different operating systems and environments, and to its minimal system requirements (which I don’t know exactly, but the fact that low-spec calculators can run it is amazing already).

Let us now start with the walkthrough for Cygwin. Warning: since this is personalized, some filenames in my tutorial use my first name (jose), but you can use whatever name you want.

  1.  Download the latest giac source code from  https://www-fourier.ujf-grenoble.fr/~parisse/giac/giac_stable.tgz
  2. Decompress the archive, go to its root directory and run ./configure. This will search for installed components and adjust the config.h in the src folder accordingly (make sure you have installed at least gmp in Cygwin).
  3. Compare the created config.h with config.h.win64 (in the src directory) and check whether anything is wrong. For instance, in my case I specifically did not want FLTK (even though I already had it installed in Cygwin; it is only xcas-related). So I made the following additional changes:
    • I removed all the FLTK options in config.h
    • I removed all NTL and INTL from the options

    Here, by “remove”, I mean I commented them out instead of trying to #define them to 0, because that wouldn’t work (for me): giac checks definitions with #ifdef, not with #if.

  4. In the src directory, copy Makefile.win64 to Makefile.jose and edit Makefile.jose
  5. In Makefile.jose:
    • remove the capital-letter objects from “OBJS = ” (they are related to the GUI application, xcas, etc., which is not what I wanted to build; I want giac independent of unnecessary libraries)
    • remove Tmpl…, TmpFLG… and the other capital-letter object files from “GIACOBJS=”
    • remove: -lintl.dll, -lintl, -lntl, anything fltk-related, libreadline.a, libhistory.a, libncurses.a, -lgsl, -lgslcblas, -llapack (I did not want to build with BLAS and LAPACK)
    • I left mpfr in (and if you want you can add libpari, but that will be discussed later) because I find multiprecision floating point rather necessary. Do not remove gmp! (You may also leave ntl in, but I have not experimented with that.)
  6. Execute
    make -f Makefile.jose giac.dll
    in the src directory

After you have done all this, you can try a minimal program (Bernard provided me with a minimal program that exercises giac.dll. Thank you, Bernard, if you are reading this!). Save the following code as jose.cc (I saved it in the src directory, but you don’t need to be as messy as me!):

#include <giac/config.h>
#include <giac/giac.h>

using namespace std;
using namespace giac;

gen pgcd(gen a,gen b){
  gen q,r;
  for (;b!=0;){
    r=irem(a,b,q);
    a=b;
    b=r;
  }
  return a;
}

int main(){
  cout << "Enter 2 integers ";
  gen a,b;
  cin >> a >> b;
  cout << pgcd(a,b) << endl;
  return 0;
}

Now, in the same directory (assuming the object files are also there), run the following.

To compile:
g++ -g -I. -DWIN32 -DHAVE_CONFIG_H -DIN_GIAC -DUSE_OPENGL32 -fno-strict-aliasing -DGIAC_GENERIC_CONSTANTS -c jose.cc

To link and create an executable called jose.exe:
g++ -g -I. -DWIN32 -DHAVE_CONFIG_H -DIN_GIAC -DUSE_OPENGL32 -fno-strict-aliasing -DGIAC_GENERIC_CONSTANTS gl2ps.o jose.o -o jose giac.dll -mwindows -L/usr/local/lib /usr/lib/libreadline.a /usr/lib/libhistory.a /usr/lib/libncurses.a -lole32 -luuid -lcomctl32 -lwsock32 -lglu32 -lopengl32 -ldmoguids -lgsl -lgslcblas -lrt -lpthread -ldl -lmpfr -lgmp -lz

If things went well, you can run the program, which computes the gcd of two numbers. Here is the output:

E:\cygwin\usr\src\giac\giac-1.2.3\src\bin\Release>jose
Enter 2 integers 2 3
1

In the next blog post I will explain how to compile it independently of Cygwin (mingw32). I will also discuss some methods to optimize the output by dynamically linking to standard libraries.

Animating in Mathematica

M. gave us a lecture on how $S_4$ can be regarded as the group of rigid transformations (rotations about the origin) of a cube centered at the origin. This was not straightforward for me to visualize. So, after long contemplation about what to use (OpenGL/C++, Povray+Blender, or Mathematica with its very useful animation features), I ended up deciding on Mathematica. I had an intuition that Mathematica, albeit not as polished for animation and visualization as Blender and Povray, would give a very fast result (which at the moment was a priority for me). To visualize this you can use the following code:

g=Graphics3D[{Opacity[0.3], Cuboid[{-5, -5, -5}, {5, 5, 5}], 
  Opacity[1.0], Thick, Green, Line[{{-5, -5, -5}, {5, 5, 5}}], 
  Line[{{5, -5, -5}, {-5, 5, 5}}], Line[{{-5, 5, -5}, {5, -5, 5}}], 
  Line[{{-5, -5, 5}, {5, 5, -5}}], Black, 
  Text[Style[1, Large, Bold, Red], {5, 5, 5}],
  Text[Style[2, Large, Bold, Red], {5, -5, 5}],
  Text[Style[3, Large, Bold, Red], {-5, 5, 5}],
  Text[Style[4, Large, Bold, Red], {-5, -5, 5}]},Boxed->False,PlotRange->{{-9,9},{-9,9},{-9,9}}];
Animate[MapAt[Rotate[#, t,{0,5,5},{0,0,0}]&,g,{1}],{t,0,Pi},
  AnimationRunning->False,AnimationRepetitions->1,AnimationDirection->ForwardBackward,FrameLabel->"Transposition(1,3)"]
Animate[MapAt[Rotate[#,t,{5,5,5},{0,0,0}]&,g,{1}],{t,0,2Pi/3},
  AnimationRunning->False,AnimationRepetitions->1,AnimationDirection->ForwardBackward,FrameLabel->"3-Cycle (2,3,4)"]
Animate[MapAt[Rotate[#,t,{0,0,1},{0,0,0}]&,g,{1}],{t,0,Pi/2},
  AnimationRunning->False,AnimationRepetitions->1,AnimationDirection->ForwardBackward,FrameLabel->"4-Cycle (1,3,4,2)"]
Animate[MapAt[Rotate[#,t,{0,0,1},{0,0,0}]&,g,{1}],{t,0,Pi},
  AnimationRunning->False,AnimationRepetitions->1,AnimationDirection->ForwardBackward,FrameLabel->"2 Transpositions (1,4)(2,3)"]

Yes, combining the Animates into one single panel was rather a pain, as I did not know how to change the variables (axis and angle of rotation) all in one panel (I wasn’t able to do it with Buttons and Dynamic in Mathematica).

If you do not have Mathematica, you can still download the animation file here. Now I am proud to claim that I made a Mathematica animation on my own: time from zero knowledge to one animation = roughly 1 hour. With Blender+Povray, although I have experience working with them, I would still need much more time to model, texture and finally render the animation. So my conclusion is: although the animation features of Mathematica leave many things to be desired, it is probably the best choice if you want to animate a few mathematical objects very fast, on the fly. The 1-hour investment is now 5 minutes of work for any further animation I want to make in Mathematica. If an animation required much more than this, e.g. complicated interactions with buttons, Phong shading, radiosity, etc., I would look at other options as well.

To save an AVI you may use something like the following (notice: I have hidden the sliders and panel for a better-looking video).


Export["C:/temp/output1.avi",
Manipulate[MapAt[Rotate[#, t,{0,5,5},{0,0,0}]&,g,{1}],{t,0,Pi,
AnimationRunning->True,AnimationDirection->ForwardBackward,AnimationRepetitions->1,Paneled->False,ControlType->None},
FrameLabel->"Transposition(1,3)"]
]

This results in very poor video compression (the quality was OK, but for 4 seconds I got 35 MB of video). That is easily remedied with Handbrake (which I use if I don’t have much time to play with more complicated tools) or VirtualDub (which I only use if I have a lot of time to invest, which is sadly less often the case). I combined four different videos showing a permutation from each conjugacy class of $S_4$. Here is the final video after compression (a link was also given earlier so you can download the video in case your browser does not support HTML5):

Mathematica and me

$
\newcommand\R{\mathbb{R}}
$
The last time I think I was serious with Mathematica was 2001, or maybe a few years earlier; I really cannot remember. But I left it and always looked to Maple for enlightenment (at which I was/am still not good). In the past when I looked at Mathematica I thought of it as a Maple look-alike, and probably any amateur would still believe this; both are very clumsy to work with. Now, in 2016, they have diverged and advanced so much. One is good at one thing, the other at another. But this is not a debate about which is better; this is a “tutorial” on what you can do well with one of them. Today Mathematica won my heart (another day it could be Maple), because the topic “cylindrical algebraic decomposition” (which we in the real world, i.e. the real algebraic geometry world, simply call “cylindrical decomposition”) shows off Mathematica’s strength.

Mathematica code is a bit different from the C++ code (or even Maple) that I am more used to. So let me list some things I (think I) learned during the course of a few days looking at one or two pieces of code:

  • f @ x means apply f to x
  • f @@ List[...] means replace the head List by f, giving f[...]
  • f @@@ List[List[...]] means apply f at level 1, giving List[f[...]]

The first problem that I had is the following: given a real algebraic set defined by a polynomial function $f:\R^m\rightarrow \R$, find the number of connected components of the complement of this set. Also find an “algorithm” that can tell whether two elements of $\R^m$ belong to the same connected component.

For the sake of argument I took $m=2$ with variables $x$ and $y$. I then wanted to find the connected components of the set of all $(x,y)$ for which $f(x,y)\neq 0$. For this I copied and modified code from a Mathematica Stack Exchange post; the modified code looks like this:

Coco[eqns_, xrange_: {x, -1, 1}, yrange_: {y, -1, 1}, 
  plotit_: False] := 
 Module[{decomp, connected, regconn}, 
  regconn = 
   Resolve@Exists[{x, y}, (x | y) \[Element] Reals, 
      RegionMember[
       RegionIntersection[ImplicitRegion[#1, {x, y}], 
        RegionBoundary@ImplicitRegion[#2, {x, y}]], {x, y}]] &;
  decomp = 
   List @@ BooleanMinimize@CylindricalDecomposition[eqns, {x, y}];
  connected = 
   Or @@@ ConnectedComponents@
     Graph[decomp, 
      UndirectedEdge @@@ 
       Select[Subsets[decomp, {2}], 
        regconn @@ # || regconn @@ Reverse@# &]];
  Print["number of connected components: ", Length@connected];
  If[plotit,Print[
    (Quiet@
         RegionPlot[#, xrange, yrange, 
          PlotPoints -> 100] & /@ {decomp, connected})~
     Join~{FullSimplify[connected, (x | y) \[Element] Reals]}]]; 
  Return[connected]]

As you can see, the code simplifies the regions defined by the cylindrical algebraic decomposition using disjunctive normal form, then finds which regions touch by identifying each region with a vertex of a graph and connecting two vertices if their regions have a common intersection. Mathematica’s (quite powerful) graph theory functionality can then figure out the connected components of the graph and thus identify exactly the connected components of the Zariski open set defined by the complement of the hypersurface. This, I think, is overkill. You do not really need to convert everything into a graph and use Mathematica’s connected-components procedure for graphs. You could just use a pigeonhole principle: place each region in a container and iteratively, in order, fill the containers depending on common intersections of regions, creating a new container whenever no existing container holds a region with a common intersection. Since I was/am a lazy animal and the procedure was fast enough for my purpose, I left it as it was.

I tested this on a curve for which I knew how many connected components its complement has, namely the curve $y^2=x(x-1)(x-2)(x-3)(x-4)$, the plot of which looks like this:
curve_4_coco

After applying Coco to the curve by typing the following (the three other parameters of Coco are optional; they are only used to plot the regions)

connected = Coco[y^2 != x(x-1)(x-2)(x-3)(x-4),{x,0,5},{y,-10,10},True];

I get the picture for the cylindrical algebraic decompositions

the picture of the different components

the number of components and the actual regions given (click to see in full glory)

The counting of components for this curve, without plotting, took around 6.7 seconds on my not-so-fast laptop. Understandably, most of the time was taken up by the plotting. The plotting does not concern me much, so I am happy with this. I think it can be made even faster (I predict that with careful coding and optimization you could squeeze it down to 2 seconds).

Hopefully, in a future post I will show how this is done (with fewer pictures) when the hypersurface lies in a space of dimension greater than two (i.e. $m>2$).

Multiple Screensavers Randomized at a Time Interval

Anyone who has worked with Linux might be familiar with XScreenSaver. I only got familiar with it late, but I was quite annoyed that Windows did not offer the possibility of using multiple screensavers that change at a fixed time interval, say every minute, within a single invocation of the screensaver. I think there are programs that change the default screensaver to another random screensaver between sessions, but what I wanted was XScreenSaver-style behavior in Windows: one screensaver session, uninterrupted by user input, switching to another random screensaver after a specific time interval while the computer is still idle. And I wanted randomness as well. I searched high and low and found ScreenMonkey on the web. I think it has some .NET dependencies. In any case, I was a bit annoyed that I had to pay to get rid of the monkey shown between screensavers. It didn’t bother me that much, but I still felt challenged to solve the problem on my own by writing my own multiple-screensaver randomizer.

After a little work I created multiscr for Windows. I’m sure it is a bit buggy and might not be to everybody’s liking. After all, I wrote it only to please myself and not for other people’s consumption. Nevertheless, I’d like to share it in case someone is interested in using it: click here to download. A few words of warning though:

  • The screensaver does not use the Windows registry to save the user’s choice of screensavers. It saves everything in the %SYSTEMROOT% directory (usually Windows/System32), so this directory must be writable by the program.
  • The time interval for switching between screensavers should not be less than 10 seconds.
  • The program hooks mouse and keyboard events to stop the screensaver when the user makes a mouse or keyboard input. (I do everything manually instead of linking directly against Scrnsaver.lib, which I personally find quite restrictive. This seems to work quite fine.)
  • The program simulates a mouse event (a 1-pixel movement) to force the current screensaver to stop so it can switch to another random screensaver after the given time interval.

I can release the code upon request. At the moment I am not doing so because it would take extra work: making the code more readable, removing/customizing some unnecessary internal code (which isn’t needed for this program), adding a Visual Studio project file for ease of compiling, and putting in some kind of license or sharing note. At the moment I’m just too lazy, especially since probably not many people are interested in the code anyway. Send me a line if you are.

Generalizing Galois Theory for Commutative Rings - Part I

I am not sure why this idea lost popularity after the 60’s. Papers still appear about it, but more people seemed interested in it half a century ago. I would say the idea started independently on two fronts. On one hand, there was a group of noncommutative and homological algebraists working on ideas probably once inspired by the category of fields. For instance, separable algebras (separable field extensions) and von Neumann regular rings with their semihereditary and quasi-inverse properties (not far removed from fields and products of fields) were, in my opinion, quite popular among noncommutative and homological algebraists. Then there was a group of people who deliberately wanted to see ideas developed for Galois theory extended to rings. We now have definitions for separable ring extensions, splitting ring extensions and even algebraic extensions of rings (which is not the usual algebraic extension one would intuitively define). The last topic (algebraic extensions) was studied by Borho, Enochs, Hochster and Raphael.

Having said that, I decided to add myself to the set of cooks (to make a better broth). Recently I proved, for instance, the following:

Proposition. Let $A$ be a Baer ring, let $B$ be its total integral closure (also called the algebraic closure by Robert Raphael), and suppose $f\in A[x]$ is a non-zero monic polynomial over $A$. Consider the set $S$ of zeros of $f$ in $B$. Then $A[S]$ is a finitely generated module over $A$.

The proof is a bit technical; to share it I am going to give a lecture and write an article about it (to be continued…).

Edit: I have a lot of new results here, but I decided not to write a second post about this yet; I think a pdf file is better for this kind of thing. My paper related to this topic, with proofs, can be found here.

Motorola Droid Mainboard

Short hint: do not disassemble your smartphone to replace the mainboard battery. For the long story, read the whole post; to get straight to the point, read its last paragraph.

A long time ago I bought my Motorola Droid I, second hand, for around €30. The smartphone has served me well, even until now; in fact, it is still my first and only smartphone. Needless to say, it is slowly showing signs of its age. Now, when I remove the phone’s battery from the back, the time is reset to 01:00 and the date to 01.01.1970. I figured that such smartphones should have an internal mainboard battery that keeps the time setting intact even after removing the removable (or rather, “easily removable”) battery. In fact, it does. So I decided to disassemble my phone and replace this internal battery. If you observe carefully, this video, in which a person disassembles (I would say rather unprofessionally) a Motorola Droid, shows the internal battery:

I write “unprofessionally” because there is a video of a disassembly of a Samsung Galaxy that is so detailed and professionally done (with the “correct” household equipment, done the correct way) that all other smartphone disassembly videos pale in comparison to this one:

Now, after disassembling it, I noticed that I had wasted my time. There is indeed a button cell attached to the phone’s mainboard; I think it is a CR2430 (almost 3 mm thick). BUT the cell is soldered to the board, so there is no way to just slip the battery off. The solder joints are very fine; I would guess I would need a very fine and stable soldering station to desolder it and then solder in a new battery. My soldering iron (and my shaky hands) would probably ruin the board. I ruined the battery connection anyway: the metal tab soldered to the battery was cut off when I tried to push the battery out (not realizing quickly enough that it was soldered and not just glued). All was not lost, since the smartphone works without the internal cell (which was already dead) anyway. I reassembled the phone and installed ClockSync from Google Play to keep my time synchronized and exact. With a spare cell and a good soldering iron I could have tried more, but I think (in this situation) it is best to use the phone until it is no longer usable and then buy a new one. It was a good lesson, though: I learned a little more about the electronics of a smartphone and can do better the next time I want to disassemble one. This is how I learned to build my own PC and repair laptops in the beginning.

Generalizing Galois Groups

$\newcommand{\N}{\mathbb{N}}
\newcommand\Q{\mathbb{Q}}
$
There have been many attempts to generalize the notion of algebraic extensions of fields to other (more general) categories. One of my favourite generalizations is for the category of reduced commutative rings, made popular by the likes of Edgar Enochs, Robert Raphael and Mel Hochster. An algebraic extension in this category is just an essential extension that is an integral extension. Why is an integral extension alone not enough? One simple reason is that one can never end an integral extension in this category (you can always find a strict integral extension of a reduced commutative unitary ring that is again reduced and commutative). Requiring an essential extension (essential extensions can be defined in a purely category-theoretical way) allows for a “largest” algebraic closure. In fact, Hochster has shown that any such reduced commutative unitary ring $A$ has a largest essential and integral extension, which is called the total integral closure of the ring. By largest we mean that for any essential and integral extension of $A$ there is an $A$-monomorphism from this extension to the total integral closure.

The total integral closure is also rightfully known as the algebraic closure of the ring. This name is justified by the following characterization (due to Hochster):

Let $B$ be the total integral closure of $A$. Then:
– all monic polynomials of degree $n\in\N$ with coefficients in $A$ factor into $n$ linear polynomials with coefficients in $B$;
– all residue domains with respect to ideals of $B$ are integrally closed in their algebraically closed fields of fractions
(in particular, all residue fields with respect to maximal ideals are algebraically closed).

This easily leads to a characterization of algebraically closed domains:
A domain is algebraically closed iff it is integrally closed and its field of fractions is algebraically closed.
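A quick illustration of this characterization (an example I am adding here, not taken from the cited papers): the ring of all algebraic integers is an algebraically closed domain.

```latex
% The ring of all algebraic integers:
\[
  \overline{\mathbb{Z}}
    := \{\alpha \in \overline{\mathbb{Q}} : \alpha \text{ is integral over } \mathbb{Z}\}
\]
% It is integrally closed by construction, and its field of fractions is
% $\overline{\mathbb{Q}}$, since every algebraic number can be written as
% an algebraic integer divided by an ordinary integer. As
% $\overline{\mathbb{Q}}$ is algebraically closed, the criterion above
% shows that $\overline{\mathbb{Z}}$ is an algebraically closed domain.
```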

More was investigated by Raphael, who mostly looked at the von Neumann regular rings that are algebraically closed.

The next question one could pose is the following:
Galois groups of finite extensions enjoy the benefit of being finite. Can the same be said of the $A$-monomorphisms between an essential and integral extension of $A$ and its algebraic closure? I will give an example for which we get infinitely many such $A$-monomorphisms:

Let $A = \Q^\N$. Then $\Q$ itself can canonically be embedded (as a subring) into $A$ (namely as the constant sequences). The polynomial $f:=(x^2-2)(x^2-3)$ lies in $A[x]$ (we clearly abuse notation here: $2$ (resp. $3$) stands for the constant sequence with value $2$ (resp. $3$)). This polynomial has infinitely many zeros in the overring $B:=\Q(\sqrt{2},\sqrt{3})^\N$ of $A$, and clearly $B$ is both an essential and an integral extension of $A$. There are therefore infinitely many such zeros that can be mapped onto each other (by products of the maps obtained from the usual Galois groups).
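The counting behind this example can be sketched numerically (a sketch of my own; the floating-point approximations are for illustration only):

```python
import itertools
import math

# The four zeros of f = (x^2 - 2)(x^2 - 3) in Q(sqrt 2, sqrt 3),
# approximated here as floats.
roots = [math.sqrt(2), -math.sqrt(2), math.sqrt(3), -math.sqrt(3)]

def f(x):
    return (x**2 - 2) * (x**2 - 3)

# Each of the four values really is a zero (up to floating-point error).
assert all(abs(f(r)) < 1e-9 for r in roots)

# A zero of f in B = Q(sqrt2, sqrt3)^N is any sequence whose n-th entry
# is one of the four roots; already the first 5 coordinates admit 4^5
# independent choices, so the full sequences give infinitely many zeros.
prefixes = list(itertools.product(roots, repeat=5))
print(len(prefixes))  # 1024 = 4**5
```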

I do however believe that if we work with only one polynomial, say $f\in A[x]$, then extending $A$ within $B$ so that it contains all the zeros of $f$ will give a finitely generated $A$-module, provided $A$ is a Baer reduced commutative unitary ring. I will give a more detailed discussion of this in a future blog post.

[1] E. Enochs, Totally Integrally Closed Rings. Proc. Amer. Math. Soc. 1968, Vol. 19, No. 3, p. 701-706.
[2] M. Hochster, Totally Integrally Closed Rings and Extremal Spaces. Pac. J. Math. 1969, Vol. 142, p. 767-779.
[3] R.M. Raphael, Algebraic Extensions of Commutative Regular Rings. Canad. J. Math. 1970, Vol. 22, p. 1133-1155.

The New Way to do Math

I recently realized that I have missed all along an ingenious way to do math. This might sound naive or even stupid, but I really never knew! My way of doing math never had any form of discipline. I always stupidly believed that chaos will eventually result in pattern. My table is always messy, my notes are always scattered, and I jump from sheet to sheet when I scribble my ideas. An absolute heaven for the pure lover of chaos! Well, I discovered a new way to improve my math, and it’s not really to clean up the mess I just described… I have not yet become wise enough to realize that. Instead, I introduced a form of scheduling into how I do math, and surprisingly it is delivering good results.

So here is my recipe, which I think is working very nicely for me: I spend alternating days reading and then creating new math or questions from what I read, without relying much on anything new. For instance, on a Monday I read a paper I really liked (“like” means three things for me: 1. I like the subject. 2. The paper is not more than 15 pages long if the topic is terribly new. 3. I do not need more than 5 references to learn the new things needed to understand the paper). I cram and prep on this paper almost the whole day, as if it were my exam. The next day, a Tuesday, I read almost nothing at all. I begin imagining myself writing a new paper based on questions I ask about the paper I read the previous day. If the paper still remains interesting, I continue… otherwise I just ditch the whole thing altogether. I keep pushing myself to ask new questions that are not in the paper (or, even if they were in the paper, I probably had not understood it thoroughly enough to notice). In general, I get a whole new understanding of the topic, and either I understand the paper much more thoroughly or I am in fact ready to publish an extension of the paper, or even on a new topic not directly related to it.

Graphing the Time Stamps

Remember here where I wrote about a python script that helps me time-stamp my activities? Now I had an activity for which I also wanted to plot graphically how much time I have spent since I started on it. For this, I thought I would make use of matplotlib and plot the progress as a days-vs-(minutes spent/day) graph. I already had the csv file created by my time stamper and I wanted to use that data format for this. But I wanted to have this more general, i.e. I wanted to just drag and drop any csv file created by the time-stamping script and see the graph. To do this (I use windows!) I decided to allow the windows shell to drag and drop files onto python scripts. This is done by adding the following to the windows registry:


Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
@="{60254CA5-953B-11CF-8C96-00AA00B8708C}"
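For completeness, the plotting script itself could look something like the sketch below. I do not have the exact column layout of the stamper’s csv in front of me, so I assume (hypothetically) rows of the form `date,minutes`; the function names are mine:

```python
import collections
import csv
import sys

def minutes_per_day(rows):
    """Aggregate (day, minutes) pairs into total minutes per day.

    Assumes each row looks like ("2014-03-01", "30") -- an assumed
    format, since the real one is set by the time-stamping script.
    """
    totals = collections.defaultdict(float)
    for day, minutes in rows:
        totals[day] += float(minutes)
    return dict(totals)

def plot_progress(path):
    # matplotlib is imported lazily so the aggregation above can be
    # reused without a plotting backend installed.
    import matplotlib.pyplot as plt
    with open(path, newline="") as fh:
        totals = minutes_per_day(csv.reader(fh))
    days = sorted(totals)
    plt.bar(range(len(days)), [totals[d] for d in days])
    plt.xticks(range(len(days)), days, rotation=45)
    plt.xlabel("day")
    plt.ylabel("minutes spent / day")
    plt.tight_layout()
    plt.show()

if __name__ == "__main__" and len(sys.argv) > 1:
    # sys.argv[1] is the file dropped onto the script by the shell.
    plot_progress(sys.argv[1])
```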

Now I can drag and drop csv files created by the stamper onto this script to see the progress graph. The result then looks like this:

[Figure: work_graph — days vs minutes spent per day]

Enjoy!

Tagline