Compiling Giac in Windows (Cygwin)

I tried many times to compile Giac for Windows. Giac is a computer algebra system with the advantage of being very adaptable to many operating systems (if I remember correctly these include: Linux, Macintosh, WinCE, Android, Windows). I believe that the developer(s) (mainly Bernard Parisse) mostly develop for Linux, but they made sure that it can be compiled on many other operating systems, and they even provided makefiles adapted for different operating systems. As with much other software that claims universal compilability, one still needs to work to get it compiled on an operating system foreign to the developers.

In the past my compilation failed because of some dependencies that I did not understand or because of some flags that I did not set. So after many years, while I was idle at some math conference, I decided to write to the developers and ask for their help. Initially I wanted to join their forum, which would allow easier communication between developers and users. It took a while for me to get into their forum. For some reason, many computer algebra systems seem to have discussion forums that are dormant or very difficult to register for; I remember the same problem when I wanted to register for the Singular forum. Successfully joining the forum somehow exhausted me, so I did not pursue the question of compiling giac in windows. This was not a wise decision: a year later I still saw the need to compile giac. Switching to linux was a last resort option for me. My developments are so deeply rooted in Windows that I would have had to port everything to linux, and that workload could be much more than finding a way to compile giac.

After posting the question in their forum, I was surprised how helpful Bernard was in guiding me. I had three steps in mind if I wanted to have the giac library working in windows:

  1. successfully compile in Cygwin
  2. successfully compile in Mingw32
  3. successfully compile a library for Visual Studio (2008)

I am glad to announce that I am satisfied with the first two: compiling for cygwin and compiling with mingw32. My next and final goal is to try it in visual studio (which belongs in a separate blog post). I promised Bernard I would document this in a blog post because there seems to be a lack of information for early developers using giac (the forum and some old sites write about it, but giac deserves much, much more).

To start the walkthrough, let me first write that many things (unknown to many people) use giac. Among much other software and hardware, the following depend on giac: the HP Prime calculator, the TI NSpire calculator, Geogebra. I really believe this boils down to its flexibility, playing well in different operating systems and environments, and its minimal system requirements (I do not know exactly what they are, but the fact that low-spec calculators can work with it is amazing already).

Let us now start with the walkthrough for cygwin. Warning: since this is personalized, some filenames in my tutorial use my first name (jose), but you can use whatever name you want.

  1.  Download the latest giac source code from the giac homepage.
  2. Decompress the archive, go to its root directory and run ./configure. This will search for installed components and adjust config.h in the src folder accordingly (make sure you have at least gmp installed in cygwin).
  3. Compare the created config.h with config.h.win64 (in the src directory) and check whether anything is wrong. For instance, in my case I specifically did not want FLTK (even though I already had it installed in cygwin), as it is only xcas related. So I made the following additional changes:
    • I removed all the FLTK options in config.h
    • I removed all NTL and INTL from the options

    Here, by “remove”, I mean I commented them out instead of defining them to 0, because that would not work (for me): giac checks these options with #ifdef, not with #if.

  4. In the source directory copy Makefile.win64 to Makefile.jose and edit Makefile.jose
  5. In Makefile.jose:
    • remove the capital-letter objs in “OBJS = ” (they are related to the GUI application, xcas, etc., which is not what I wanted to build; I want giac independent of unnecessary libraries)
    • remove Tmpl…, TmpFLG… and the other capital-letter obj files from “GIACOBJS=”
    • remove: -lintl.dll, -lintl, -lntl, anything fltk related, libreadline.a, libhistory.a, libncurses.a, -lgsl, -lgslcblas, -llapack (I did not want to build with blas and lapack)
    • I left mpfr in (and if you want you can put in libpari, but that will be discussed later) because I found multiprecision floating point rather necessary. Do not remove gmp! (And if you want, leave ntl in, but I did not experiment with that.)
  6. Execute
    make -f Makefile.jose giac.dll
    in src directory
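As an aside, the comment-out edit from step 3 can even be scripted. Here is a small Python sketch; the HAVE_* macro names are made up for illustration, so check your generated config.h for the real ones:

```python
import re

# hypothetical excerpt of a configure-generated config.h (macro names made up)
config = """#define HAVE_LIBFLTK 1
#define HAVE_LIBGMP 1
#define HAVE_LIBNTL 1"""

# options to disable; comment them out rather than "#define ... 0",
# because giac tests them with #ifdef (a 0-definition still counts as defined)
unwanted = {"HAVE_LIBFLTK", "HAVE_LIBNTL"}

lines = []
for line in config.splitlines():
    m = re.match(r"#define\s+(\w+)", line)
    if m and m.group(1) in unwanted:
        lines.append("/* " + line + " */")   # C-style comment keeps the line visible
    else:
        lines.append(line)

print("\n".join(lines))
```

Running this prints the gmp line untouched and the two unwanted options wrapped in comments.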

After you have done all this, you can execute a minimal program (Bernard provided me with a minimal program that tries giac.dll. Thank you Bernard if you are reading this!). Namely, save the following code as jose.cc (I saved it in the src directory, but you don’t need to be as dirty as me!):

#include <giac/config.h>
#include <giac/giac.h>

using namespace std;
using namespace giac;

gen pgcd(gen a,gen b){
  gen q,r;
  for (;b!=0;){
    r=irem(a,b,q); // Euclidean step: q is the quotient, r the remainder of a by b
    a=b;
    b=r;
  }
  return a;
}

int main(){
  cout << "Enter 2 integers ";
  gen a,b;
  cin >> a >> b;
  cout << pgcd(a,b) << endl;
  return 0;
}
Now in the same directory (assuming the object files are also there) run the following.

To compile jose.cc (something like the following, reusing the flags of the link step below):
g++ -g -c -I. -DWIN32 -DHAVE_CONFIG_H -DIN_GIAC -DUSE_OPENGL32 -fno-strict-aliasing -DGIAC_GENERIC_CONSTANTS jose.cc

To link and create an executable called jose.exe:
g++ -g -I. -DWIN32 -DHAVE_CONFIG_H -DIN_GIAC -DUSE_OPENGL32 -fno-strict-aliasing -DGIAC_GENERIC_CONSTANTS gl2ps.o jose.o -o jose giac.dll -mwindows -L/usr/local/lib /usr/lib/libreadline.a /usr/lib/libhistory.a /usr/lib/libncurses.a -lole32 -luuid -lcomctl32 -lwsock32 -lglu32 -lopengl32 -ldmoguids -lgsl -lgslcblas -lrt -lpthread -ldl -lmpfr -lgmp -lz

If things went well you can run the program, which computes the gcd of two numbers. Here is the output:

Enter 2 integers 2 3
1

In the next blog post I will explain how to compile it independently of cygwin (with mingw32). I will also discuss some methods to optimize the output by dynamically linking to standard libraries.

Animating in Mathematica

M. gave us a lecture on how $S_4$ can be regarded as the rigid transformations (rotations about the origin) of a cube centered at the origin. This was not straightforward for me to visualize. So, after long contemplation of what to use (OpenGL/C++, Povray+Blender, or Mathematica and its very useful animation features), I ended up deciding to use Mathematica. I had an intuition that Mathematica, albeit not as polished in animation and visualization as Blender and Povray, would provide a very fast result (which at the moment was a priority for me). To visualize this you can use the following code:

g=Graphics3D[{Opacity[0.3], Cuboid[{-5, -5, -5}, {5, 5, 5}], 
  Opacity[1.0], Thick, Green, Line[{{-5, -5, -5}, {5, 5, 5}}], 
  Line[{{5, -5, -5}, {-5, 5, 5}}], Line[{{-5, 5, -5}, {5, -5, 5}}], 
  Line[{{-5, -5, 5}, {5, 5, -5}}], Black, 
  Text[Style[1, Large, Bold, Red], {5, 5, 5}],
  Text[Style[2, Large, Bold, Red], {5, -5, 5}],
  Text[Style[3, Large, Bold, Red], {-5, 5, 5}],
  Text[Style[4, Large, Bold, Red], {-5, -5, 5}]},Boxed->False,PlotRange->{{-9,9},{-9,9},{-9,9}}];
Animate[MapAt[Rotate[#, t, {0, 5, 5}, {0, 0, 0}] &, g, {1}], {t, 0, Pi},
  AnimationRunning -> False, AnimationRepetitions -> 1,
  AnimationDirection -> ForwardBackward, FrameLabel -> "3-Cycle (2,3,4)"]

The animations for the other conjugacy classes are built the same way; only the rotation axis, the range of t and the FrameLabel change (e.g. FrameLabel -> "4-Cycle (1,3,4,2)" and FrameLabel -> "2 Transpositions (1,4)(2,3)").

Yes, combining the Animates into one single panel was rather a pain, as I did not know how to change the variables (axis and angle of rotation) all in one panel (I wasn’t able to do it with Buttons and Dynamic in Mathematica).

If you do not have Mathematica, you can still download the animation file here. Now I am proud to claim that I did a Mathematica animation on my own: time from 0 knowledge to one animation = roughly 1 hour. If this were Blender+Povray, although I have experience working with them, I would still need much more time to model, texture and finally render the animation. So my conclusion is: although the animation features of Mathematica leave many things to be desired, it is probably the best choice if you want to animate a few mathematical objects very fast, on the fly. The 1 hour I invested is now 5 minutes of work for any further animation I want to make in Mathematica. If the animation I wanted would require much more time than this, e.g. complicated interactions with buttons, phong shading and radiosity etc., I would probably look at other options as well.

To save an avi you may use something like the following (notice: I have hidden the slider and the panel for a cleaner look in the video; ControlType -> None and Paneled -> False are one way to do that).

Manipulate[MapAt[Rotate[#, t, {0, 5, 5}, {0, 0, 0}] &, g, {1}],
  {t, 0, Pi, ControlType -> None}, Paneled -> False]

This results in very poor video compression (the quality was OK, but for 4 seconds I got 35Mb of video). But that is easily remedied by using Handbrake (I do this when I don’t have much time to play around with more complicated tools) or VirtualDub (which I only use if I have a lot of time to invest... which is sadly less often the case). I combined four different videos showing a permutation from each conjugacy class of $S_4$. Here is the final video after compression (a link was also given earlier for you to download the video in case you do not have HTML5 support in your browser):

Mathematica and me

The last time I think I was serious with Mathematica was 2001, or maybe a few years earlier; I really cannot remember. But I left it and always looked to Maple for enlightenment (at which I was, and still am, not good). In the past when I looked at Mathematica I thought of it as similar to Maple, and probably any amateur would still believe this. Both were very clumsy to work with. Now, in 2016, they differ so much and have advanced so much. One is good at one thing, the other at another. But this is not a debate about which is better. This is a “tutorial” on what you can do a little good with one of them. Today Mathematica won my heart (another day it could be Maple), because the topic “Cylindrical Algebraic Decomposition” (which we in the real world, i.e. the real algebraic geometry world, simply call “cylindrical decomposition”) expresses the strength of Mathematica.

Mathematica code is a bit different from the C++ code (or even Maple) that I am more used to. So let me list some things I (think I) learned during the course of a few days looking at one or two codes:

  • f @ x means apply f to x, i.e. f[x]
  • f @@ List[...] means change the head List to f, i.e. f[...]
  • f @@@ List[List[...]] means List[f[...]] (i.e. apply at the first level)
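The three operators confused me at first; a loose Python analogy helped (f, x and the lists are placeholders of my own, not precise Mathematica semantics):

```python
def f(*args):
    # stand-in for a Mathematica function f: records its head and arguments
    return ("f", args)

x = 7
lst = [1, 2, 3]
nested = [[1, 2], [3, 4]]

at = f(x)                             # f @ x               ~ f[x]
apply_all = f(*lst)                   # f @@ {1,2,3}        ~ f[1,2,3]  (head List -> f)
apply_lvl1 = [f(*a) for a in nested]  # f @@@ {{1,2},{3,4}} ~ {f[1,2], f[3,4]}

print(at)
print(apply_all)
print(apply_lvl1)
```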

The first problem that I had is the following: given a real algebraic set defined by a polynomial function $f:\R^m\rightarrow \R$, find the number of connected components of the complement of this set. Also find an “algorithm” that can tell whether two elements of $\R^m$ belong to the same connected component.

For the sake of argument I took $m=2$ and the variables to be $x$ and $y$. Then I wanted to find the connected components of all those $(x,y)$ for which $f(x,y)\neq 0$. To do this I copied and modified a code from a mathematica stackexchange post; the modified code looks like this:

Coco[eqns_, xrange_: {x, -1, 1}, yrange_: {y, -1, 1}, 
  plotit_: False] := 
 Module[{decomp, connected, regconn}, 
  regconn = 
   Resolve@Exists[{x, y}, (x | y) \[Element] Reals, 
      RegionMember[
       RegionIntersection[ImplicitRegion[#1, {x, y}], 
        RegionBoundary@ImplicitRegion[#2, {x, y}]], {x, y}]] &;
  decomp = 
   List @@ BooleanMinimize@CylindricalDecomposition[eqns, {x, y}];
  connected = 
   Or @@@ ConnectedComponents@
     Graph[decomp, 
      UndirectedEdge @@@ 
       Select[Subsets[decomp, {2}], 
        regconn @@ # || regconn @@ Reverse@# &]];
  Print["number of connected components: ", Length@connected];
  If[plotit, 
   (RegionPlot[#, xrange, yrange, 
       PlotPoints -> 100] & /@ {decomp, connected})~
    Join~{FullSimplify[connected, (x | y) \[Element] Reals]}, 
   FullSimplify[connected, (x | y) \[Element] Reals]]];

As you can see, the code simplifies the regions produced by the cylindrical algebraic decomposition using the disjunctive normal form and then finds the regions of intersection by identifying each region with a vertex of a graph and connecting two vertices if their regions have a common intersection. Then Mathematica’s (quite powerful) graph theory package can figure out the connected components of the graph and thus identify exactly the connected components of the Zariski open set defined by the complement of the hypersurface. This, I think, is overkill. You really need not convert everything into a graph and use Mathematica’s connected component procedure for graphs. You just need the pigeon-hole principle: place each region into a set container, filling the containers iteratively, in order, depending on common intersections of regions, or making a new container if no existing container holds a region with a common intersection. Since I was/am a lazy animal and the procedure was fast enough for my purpose, I left it as it was.
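That container idea can be sketched in a few lines of Python (a toy of my own: intervals stand in for the regions and a hypothetical intersects predicate stands in for regconn):

```python
def group_regions(regions, intersects):
    # Greedy pigeon-holing: each container collects regions that are
    # connected to each other through chains of pairwise intersections.
    components = []
    for r in regions:
        # containers that the new region touches
        hits = [c for c in components if any(intersects(r, s) for s in c)]
        merged = [r]
        for c in hits:  # merge all touched containers into one
            merged.extend(c)
            components.remove(c)
        components.append(merged)
    return components

# toy example: "regions" are intervals, intersecting when they overlap
overlap = lambda a, b: a[0] <= b[1] and b[0] <= a[1]
comps = group_regions([(0, 1), (2, 3), (0.5, 2.5), (10, 11)], overlap)
print(comps)  # the first three intervals chain together, (10, 11) stays alone
```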

I tested this on a curve for which I knew how many connected components its complement has, namely the curve $y^2=x(x-1)(x-2)(x-3)(x-4)$, the plot of which looks like this:

After applying Coco to the curve by typing the following (the three other parameters of Coco are optional; they are only used to plot the regions)

connected = Coco[y^2 != x(x-1)(x-2)(x-3)(x-4),{x,0,5},{y,-10,10},True];

I get the picture for the cylindrical algebraic decompositions

the picture of the different components

the number of components and the actual regions given (click to see in full glory)

The counting of components for this curve, without plotting, took around 6.7 seconds on my not-so-fast laptop. Understandably, most of the time was sucked up by the plotting. The plotting does not concern me a lot, so I am happy with this. I think it can even be made faster (I predict that with correct coding and optimization you can squeeze it down to 2 seconds).

Hopefully, in a future post I will show how this is done (with less pictures) when the hypersurface lies in a space with dimension greater than two (i.e. $m>2$).

Multiple Screensavers Randomized at a Time Interval

Anyone who has worked with linux might be familiar with XScreensaver. It was only late that I got myself familiar with it. But I was also quite annoyed that Windows didn’t offer this possibility of using multiple screensavers changing at a fixed time interval, say every minute, within a single invocation of the screensaver. I think there are programs that can change the default screensaver to another random one, but what I wanted was XScreensaver style in windows: one call of the screensaver, uninterrupted by user input, switching to another screensaver after a specific time interval while the computer is still idle... and I wanted randomness as well. I searched high and low and found ScreenMonkey on the WWW. I think it has some .NET dependencies. In any case, I was a bit annoyed that I had to pay for it in order to get rid of the monkey shown between screensavers. It didn’t bother me that much, but I was still challenged to solve the problem on my own by making my own multiple-screensaver randomizer.

After a little work I created multiscr for Windows. I’m sure it is a bit buggy and might not fit everybody’s liking. After all, I did it only to please myself and not for other people’s consumption. Nevertheless, I’d like to share it in case someone is interested in using it: Click here to download. A few words of warning though:

  • The screensaver does not use Windows Registry to save user’s choice of screensaver. It saves everything in the %SYSTEMROOT% directory (usually in Windows/System32). So this directory should be writable to the program.
  • Time interval to switch between screensavers should not be less than 10 seconds
  • The program hooks into mouse and keyboard events to stop the screensaver when the user makes a mouse or keyboard input (I do everything manually instead of linking directly to Scrnsaver.lib, which I personally find quite restrictive. This seems to work quite well.)
  • The program simulates a mouse event (a 1-pixel movement) to force the current screensaver to stop so that it can switch to another random screensaver after the given time interval.

I can release the code for free access upon request. At the moment I am not doing so because I would need to do the extra work of making the code more readable, removing/customizing some of my unnecessary internal code (which isn’t necessary to share for this program), putting in a visual studio project file for ease of compiling and adding some kind of text about license or sharing. At the moment, I’m just too lazy, esp. since not a lot of people are interested in the code anyway. Send me a line if you are interested.

Generalizing Galois Theory for Commutative Rings - Part I

I am not sure why this idea lost popularity after the 60’s. Papers still appear about it, but more people seemed interested in it half a century ago. I would say the idea started independently. On one hand we had a group of noncommutative algebraists and homological algebraists working on ideas that were probably once inspired by the category of fields. For instance, separable algebras (separable field extensions) and von Neumann regular rings with their semihereditary and quasi-inverse properties (not far removed from fields and products of fields) were, in my opinion, quite popular among noncommutative algebraists and homological algebraists. Then there was a group of people who purposely wanted to see ideas developed for Galois theory extended to rings. We now have definitions for separable ring extensions, splitting ring extensions and even algebraic extensions of rings (which is not the usual algebraic extension we would intuitively define). The last topic (algebraic extensions) was studied by Borho, Enochs, Hochster and Raphael.

Having said that, I decided to add myself to the set of cooks (to make a better broth). Recently, for instance, I proved the following:

Proposition. Let $A$ be a Baer ring and $B$ its total integral closure (this is also called the algebraic closure by Robert Raphael), and suppose $f\in A[x]$ is a non-zero monic polynomial over $A$. Consider the set of zeros $S$ of $f$ in $B$. Then $A[S]$ is a finitely generated module over $A$.

The proof is a bit technical; to share it I am going to give a lecture and write an article about it (to be continued…).

Edit: I have a lot of new results here but I decided not to write a second post about this yet. I think a pdf file is better for this kind of thing. My paper related to this topic and proofs can be found here.

Motorola Droid Mainboard

Short hint: do not disassemble your smartphone to replace the mainboard battery. For the long story, read the whole post. To get to the final point, read the last paragraph…

A long time ago I bought my Motorola Droid, second hand, for around €30. The smartphone has served me well even until now; in fact, it is still my first and only smartphone. Needless to say, it is slowly showing signs of its age. Now, when I remove the phone battery from the back, the time gets reset to 01:00 and the date to 01.01.1970. I figured that such smartphones should have an internal mainboard battery that keeps the time settings intact even after removing the removable (or rather, “easily removable”) battery. In fact, they do. So I decided to disassemble my phone and replace this internal battery. If you observe carefully, the video where a person disassembles (I would say, rather unprofessionally) a motorola droid will show the internal battery:

I write “unprofessionally” because there is a video of a disassembly of a Samsung Galaxy that is so detailed and professionally done (with “correct” household equipment and in a correct way) that all other smartphone disassembly videos pale in comparison to this one:

Now, after disassembling, I noticed that I had wasted my time doing so. There is indeed a button cell attached to the phone mainboard; I think it is a CR2430 (about 24mm in diameter and 3mm thick). BUT... the cell is soldered to the board, so there is no way of just slipping the battery off the board. The solder joints are very fine; I would guess that I need a very fine and stable soldering station to desolder it and then solder on a new battery. My soldering iron (and my shaky hands) would probably ruin the board. I ruined the battery connection anyway: the metal tab that was soldered to the battery was cut off when I tried to push the battery out (not realizing fast enough that it was soldered and not just glued). All was not lost, since the smartphone works anyway without the internal cell (which was already dead). I reassembled the phone and just installed ClockSync from Google Play to keep my time synchronized and exact. If I had a cell and a good soldering iron I could have tried more, but I think (in this situation) it is best to use the phone until it is no longer usable and then buy a new one. This was a good lesson though, since I learned a little bit more about the electronics of a smartphone and I can do better next time I want to disassemble one. This is how I learned to build my own PC and repair laptops in the beginning.

Generalizing Galois Groups

There have been many attempts to generalize the notion of algebraic extensions of fields to other (more general) categories. One of my favourite generalizations is for the category of reduced commutative rings, which was made popular by the likes of Edgar Enochs, Robert Raphael and Mel Hochster. An algebraic extension in this category is just an essential extension that is an integral extension. Why is an integral extension alone not enough? One simple reason is that integral extensions never come to an end in this category (you can always find a strict integral extension of a reduced commutative unitary ring that is again reduced and commutative). The requirement of an essential extension (essential extensions can be defined in a purely category-theoretical way) allows a “largest” algebraic closure. In fact, Hochster has shown that any such reduced commutative unitary ring $A$ has a largest essential and integral extension, which is called the total integral closure of the ring. By largest we mean that for any essential and integral extension of $A$ there is an $A$-monomorphism from this extension to the total integral closure.

The total integral closure is also rightfully known as the algebraic closure of the ring. This name is justified by the following characterization (given by Hochster):

Let $B$ be the total integral closure of $A$. Then:
– every monic polynomial of degree $n\in\N$ with coefficients in $A$ factors into $n$ linear polynomials with coefficients in $B$;
– all residue domains with respect to ideals of $B$ are integrally closed in their algebraically closed fields of fractions
(in particular, all residue fields with respect to maximal ideals are algebraically closed).

This easily leads to a characterization of algebraically closed domains:
A domain is algebraically closed iff it is integrally closed and its field of fractions is algebraically closed.

More was investigated in the 90s by Raphael, who mostly looked at the von Neumann regular rings that are algebraically closed.

The next question one could pose is the following:
The Galois groups of finite extensions in classical Galois theory enjoy the benefit of being finite. Can the same be true of the $A$-monomorphisms between an essential and integral extension of $A$ and its algebraic closure? I will give an example in which we get infinitely many such $A$-monomorphisms:

Let $A = \Q^\N$. Then $\Q$ itself can be canonically embedded (as a subring) in $A$ (namely as the constant sequences). The polynomial $f:=(x^2-2)(x^2-3)$ lies in $A[x]$ (we clearly abuse notation here: $2$ (resp. $3$) is just the constant sequence $2$ (resp. $3$)). This polynomial has infinitely many zeros in the overring $B:=\Q(\sqrt{2},\sqrt{3})^\N$ of $A$, and clearly $B$ is both an essential and an integral extension of $A$. There are therefore infinitely many $A$-monomorphisms mapping these zeros onto each other (products of the maps obtained from the usual Galois groups).
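A quick numerical illustration of this abundance of zeros (a toy of my own: I truncate the sequences to length 5; in $B$ a zero may pick a different root of $f$ in every coordinate, so already sequences of length 5 admit $4^5$ zeros):

```python
import itertools, math

f = lambda x: (x * x - 2) * (x * x - 3)
roots = [math.sqrt(2), -math.sqrt(2), math.sqrt(3), -math.sqrt(3)]

# choose any of the four roots independently in each of the 5 coordinates;
# every such choice is a (numerical) zero of f in the product ring
zeros = list(itertools.product(roots, repeat=5))
assert all(abs(f(c)) < 1e-9 for z in zeros for c in z)
print(len(zeros))  # 4**5 = 1024
```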

I do however believe that if we work with only one polynomial, say $f\in A[x]$, then extending $A$ within $B$ so that it contains all the zeros of $f$ will give a finitely generated $A$-module, provided $A$ is a Baer reduced commutative unitary ring. I will give a more detailed discussion of this in a later post.

[1] E. Enochs, Totally Integrally Closed Rings. Proc. Amer. Math. Soc. 1968, Vol. 19, No. 3, p. 701-706.
[2] M. Hochster, Totally Integrally Closed Rings and Extremal Spaces. Pac. J. Math. 1970, Vol. 32, p. 767-779.
[3] R.M. Raphael, Algebraic Extensions of Commutative Regular Rings. Canad. J. Math. 1970, Vol. 22, p. 1133-1155.

The New Way to do Math

I recently realized that I have missed, all along, an ingenious way to do math. This might sound naive or even stupid, but I really never knew! My way of doing math never had any form of discipline. I always stupidly believed that chaos can always result in a pattern. My table is always messy, my notes are always scattered and I jump from sheet to sheet when I scribble my ideas. An absolute heaven for the pure lover of chaos! Well, I discovered a new way to improve my math, and it’s not really cleaning up the mess I just described... I have not yet become wise enough to realize that. I introduced a form of scheduling when I do math, and surprisingly it is delivering good results.

So here is my recipe, which I think is working very nicely for me: I spend alternating days reading and then creating new math or questions from what I read, without relying much on anything new. So for instance on a Monday I read a paper I really liked (“like” means three things for me: 1. I like the subject; 2. the paper is not more than 15 pages long if the topic is terribly new; 3. I do not need more than 5 references to learn the new things needed to understand the paper). I cram and prep on this paper as if it were my exam, almost the whole day. The next day, a Tuesday, I read almost nothing at all. I begin imagining myself writing a new paper based on questions I ask about the paper I read the previous day. If the paper still remains interesting I continue... otherwise I just ditch the whole thing altogether. I keep pushing myself to ask new questions not in the paper (or, if a question was in the paper, I probably did not understand the paper thoroughly). In general, I get a whole new understanding of the topic, and either I understand the paper much more thoroughly or I am in fact ready to publish an extension of the paper, or even on a new topic not directly related to it.

Graphing the Time Stamps

Remember here where I wrote about a python script that helps me time-stamp my activities? Now I had an activity for which I also wanted to plot graphically how much time I have spent since I started it. For this, I thought I would make use of matplotlib and draw the progress as a days-vs-(minutes spent per day) graph. I already had the csv file created by my time stamper and wanted to use that data format. But I wanted to keep this general, i.e. I wanted to be able to just drag and drop any csv file created by the time stamping script and see the graph. To do this (I use windows!) I decided to allow the windows shell to drag and drop files onto python scripts. This is done by adding to the windows registry as seen here:

Windows Registry Editor Version 5.00
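From memory, the keys are essentially the standard shell drop-handler registration for Python scripts (the Python.File class name is an assumption; it depends on how your Python was registered):

```
[HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
@="{60254CA5-953B-11CF-8C96-00AA00B8708C}"
```

Saved together with the header line above as a .reg file and double-clicked, this makes a file dropped onto a .py script arrive as a command line argument (sys.argv[1]).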


Now I can drag and drop csv files created by the stamper onto this script to see the progress graph. The result then looks like this:
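As for the script itself, its core is just a per-day aggregation of the stamped minutes. A sketch (the CSV column layout here is my assumption about the stamper's format, and the real script takes the file name from sys.argv, which is how the dropped file arrives):

```python
import csv, io
from collections import defaultdict

# stand-in for the dropped csv file; assumed columns: date, activity, minutes
sample = io.StringIO(
    "2016-01-01,math,30\n"
    "2016-01-01,math,45\n"
    "2016-01-02,math,20\n"
)

minutes_per_day = defaultdict(int)
for day, activity, minutes in csv.reader(sample):
    minutes_per_day[day] += int(minutes)

days = sorted(minutes_per_day)
spent = [minutes_per_day[d] for d in days]
print(days, spent)

# the matplotlib part is then essentially:
# import matplotlib.pyplot as plt
# plt.bar(range(len(days)), spent)
# plt.xticks(range(len(days)), days, rotation=45)
# plt.ylabel("minutes/day"); plt.show()
```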



Collatz conjecture reduced to residue class modulo $2^n$

As promised, I am going to present a small part of my mathematical research in my blog; interested people can just download the paper for more detail. Often, I just write down the abstract that is already part of my paper. I also write down some of the history of how things developed, which is otherwise not written in the paper or article.

I started a few months ago, in the summer of 2014. Obviously, the $3n+1$ problem (or the Collatz conjecture) is not a field I had worked in before 2014; I do mostly algebraic geometry and commutative algebra. So I wanted something new. I personally thought that it would be easier to work with binary representations of the Collatz sequence, and it turned out that, at least for me, I could understand the sequence better that way. I set myself the goal of looking for something new for at least a year; if things were no longer promising I would just call an end to this research and start something else. Well, at least I think I did find something new. It is far from any form of solution to the conjecture, or even something that might prove significant for the research community. Its only significance is probably that it is a very easy to understand sufficient condition for the Collatz conjecture to be true. Probably after this, or at least after I get it somehow published, I will not do much more; I will look at other things and then maybe every now and then take a glance at the Collatz conjecture again. I always need some change whenever I do something for a long time. If I get lucky I might see something again, but there are no guarantees. Well, let me show the abstract of the paper:

Here we investigate the odd numbers in Collatz sequences (sequences arising from the $3n+1$ problem). We are especially interested in methods using binary representations of the numbers in the sequence. In the first section, we show some results for odd Collatz sequences using mostly binary arithmetic. We see how some results become more obvious in binary arithmetic than in the usual method of computing the Collatz sequence. In the second section of this paper we deal with some known results and show how we can use the binary representation and the OCS from the first section to prove them. We give a generalization of a result by Andaloro [1] and show a generalized sufficient condition for the Collatz conjecture to be true: if, for a fixed natural number $n$, the Collatz conjecture holds for numbers congruent to $1$ modulo $2^n$, then the Collatz conjecture is true.

The paper thus provides a sequence of sufficiency sets whose set-theoretic limit is the set $\{1\}$. Similar sequences of sufficiency sets have been found before (with natural density approaching $0$, but the set-theoretic limit not necessarily the singleton containing $1$). I tend to think that this one is the simplest. The paper in preprint form can be found here!
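To make the residue class in the sufficient condition concrete, here is a tiny Python illustration (it verifies nothing about the conjecture, of course; the function name is my own):

```python
def collatz_steps(n):
    # number of Collatz steps until 1 is reached
    # (terminates only if the conjecture holds for n)
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# the sufficiency class of the abstract for n = 4:
# numbers congruent to 1 modulo 2^4
cls = [k for k in range(1, 130) if k % 16 == 1]
print({k: collatz_steps(k) for k in cls})
```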

[1] P. Andaloro, On Total Stopping Times under $3x+1$ Iteration, Fibonacci Quarterly 2000, Vol. 38, No. 1, p. 73-78