Are There Real Grounds for 64 Bits? An Overlooked Point

2006-Mar-23 | Tags: learn

Recently I have read several articles analysing the current penetration of 64-bit systems and speculating about what can be expected in the future. The viewpoints contrast sharply, and the conclusions reached differ just as much. I see the place and the opportunities of 64-bit computing from a somewhat different perspective: I think it is today in a phase of gradual transition ahead of a comprehensive breakthrough that technical progress will make inevitable.

Today, a computer used for the simplest tasks – general office work or browsing the net – running Windows XP together with an antivirus program and a firewall most probably has at least 256 MB of RAM. It may effectively have less if an integrated video card takes its share, but working on XP with 128 MB of RAM is practically impossible nowadays. 256 MB is not comfortable either, though still just tolerable. The computer I am writing this article on has 512 MB of memory; basic security software and a few other standard programs are running, and the state of affairs is this:

[Screenshot: memory usage is high]
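
The exact numbers naturally differ from machine to machine, but a snapshot like the one above is easy to take yourself. The sketch below is a minimal illustration, assuming a Windows system and a Python interpreter with ctypes available; it reads the same counters through the Win32 GlobalMemoryStatusEx call:

```python
# Minimal sketch: query physical memory size and load on Windows.
# Field layout follows the documented MEMORYSTATUSEX structure.
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),       # percentage of physical memory in use
        ("ullTotalPhys", ctypes.c_ulonglong),   # total physical memory, in bytes
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

print(f"Physical memory: {status.ullTotalPhys / 2**20:.0f} MB")
print(f"Memory load: {status.dwMemoryLoad}%")
```
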
For any more serious task it would be worth expanding the memory. The point is that in an average situation 256 MB of RAM is just barely viable, and 512 MB is the sensible minimum. In an environment even slightly more demanding than average, considerably more memory is needed. Memory prices are very low these days, so an upgrade is no real financial burden. The usual pattern is that the memory turns out not to be enough; we struggle a little, or a little too much, then we make up our minds and resign ourselves to the upgrade. And that is it: an easy process. Sometimes we even feel a touch of nostalgia at the thought that not long ago so little RAM was still so much, and we do not quite understand it all, and anyway there is Microsoft and Intel and the whole consumer-society business...

The point is that memory requirements are increasing. CPU speed is increasing. The resource requirements of the software running on our systems are increasing. Three evils hand in hand. Sooner or later, depending on their temperament, everybody makes up their mind to expand, and we expand also because we can afford it. Meanwhile it does not even occur to us what a great opportunity it is that, technically, the expansion can be carried out without any problem. At least up to 4 GB of RAM this is so. Those who need more memory have to switch over to a 64-bit system, and for that a 64-bit CPU and motherboard are needed, along with 64-bit drivers, 64-bit security software and many other things.

Why?

In a system built around a 32-bit CPU everything works on 32 bits, so the registers used for memory addressing are 32 bits wide as well; with them, altogether 2^32 = 4,294,967,296 bytes, i.e. 4 GB of memory, can be addressed.

Accordingly, the real grounds for 64 bits lie in the ability to address a far larger amount of memory. A further consequence of 64-bit computing is that the system can process more data at a time, which is to say that computing performance increases. The articles I mentioned built their analyses, and their theories, on this 64-bit data processing. There are certainly fields where that matters, but in my opinion 64-bit computing is inevitable because of 64-bit addressing; there is simply no other way forward. 4 GB may sound like a great deal, but typical values are already in this order of magnitude, and there are a great many tasks that need far more resources than the average. In a corporate environment, or on more serious workstations, such tasks simply cannot be solved within the limits of 32-bit computing.
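
The arithmetic behind this is easy to check. The snippet below simply evaluates the two address-space sizes that follow from the register widths discussed above; the unit conversions are the only thing added:

```python
# Address space reachable with 32-bit and 64-bit pointers.
addressable_32 = 2 ** 32   # 4,294,967,296 bytes
addressable_64 = 2 ** 64   # 18,446,744,073,709,551,616 bytes

print(f"32-bit address space: {addressable_32 / 2**30:.0f} GB")  # -> 4 GB
print(f"64-bit address space: {addressable_64 / 2**60:.0f} EB")  # -> 16 EB (exabytes)
```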

Why?

Memory requirements keep growing. Windows Vista, to be released in the near future, will need much more memory than XP; the memory required for the same task is likely to double, bringing us even closer to the 4 GB limit. Security software will have to face new challenges, which means, among other things, that its memory requirements will grow as well. This is a complex process, examined by a number of studies and analyses. Still, there is a simple way to approach it: Moore's law.

 

Moore’s law

Moore’s law is an empirical observation about technological development. It states that the complexity of integrated circuits, measured at the lowest-priced such component, doubles roughly every 18 months.
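
Written out as a formula, this is simply the initial complexity multiplied by 2^(years / 1.5). The sketch below is a minimal illustration using the 18-month period quoted above; the starting figure of 1,000 components is an arbitrary example, not a value taken from any datasheet:

```python
# Projected component count under Moore's law, assuming the 18-month
# doubling period quoted above (Moore himself later spoke of two years).
def projected_complexity(initial_count, years, doubling_period_years=1.5):
    return initial_count * 2 ** (years / doubling_period_years)

# An arbitrary example: 1,000 components today, ten years on.
print(f"{projected_complexity(1000, 10):,.0f} components")  # roughly a hundredfold increase
```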

Gordon E. Moore is one of the founders of Intel Corporation. His original statement was published in the April 19, 1965 issue of Electronics magazine, in an article entitled “Cramming more components onto integrated circuits”:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.”

Gordon Moore himself did not yet call his observation a law; it was first called one by Carver Mead, a Caltech professor and one of the pioneers of VLSI technology. In 1975 Moore revised his prediction to a doubling every two years, and he has pointed out several times that he never spoke of 18 months.

By the end of the 1970s, Moore’s law had come to be understood as a limit on the number of transistors in the most complex chips. At the same time, it is often cited as a rule describing the ever faster growth of computing power per unit price.

A similar regularity has been observed in hard-disk storage capacity per unit price. Magnetic storage technology is in fact advancing even faster than semiconductor technology, although this shows up mainly in the capacity at our disposal; the speed of hard disks is not improving nearly as spectacularly. The storage-capacity counterpart of Moore’s law has been named Kryder’s law.

In the last quarter of 2004 processors were manufactured on 130 nm and 90 nm processes. At the end of 2005 the start-up of 65 nm production lines was announced; a decade earlier the feature size of integrated circuits was 500 nm. Some companies are already working on manufacturing capacity for feature sizes of 45 nm or below. Each new generation of process technology keeps postponing the point at which Moore’s law finally becomes obsolete.

The ITRS technology roadmap, published every year, predicts that Moore’s law will remain in force for quite a few chip generations to come. At the 18-month pace that could mean as much as a hundredfold increase in the coming decade; the roadmap itself assumes a doubling every three years, which works out to roughly a tenfold increase over a decade.
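
Both decade figures can be verified with the same doubling arithmetic; the short snippet below merely contrasts the 18-month pace with the roadmap's three-year pace:

```python
# Compound growth over a decade: factor = 2 ** (years / doubling_period).
print(f"18-month doubling over 10 years: ~{2 ** (10 / 1.5):.0f}x")  # ~102x, the 'hundredfold'
print(f"3-year doubling over 10 years:   ~{2 ** (10 / 3):.0f}x")    # ~10x
```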

Hardware getting faster at an exponential rate does not mean that software speeds up in step. The productivity of software developers has grown only slowly over the past decades, even though the hardware at our disposal has improved at an ever increasing rate; our software simply keeps getting larger and more complex. Wirth’s law sums this up: "Software is getting slower more rapidly than hardware becomes faster."

It is worth noting that while computing performance keeps getting cheaper, the manufacturers’ costs, driven by Moore’s law, move in the opposite direction: research, development and testing costs have risen with every new generation of integrated circuits, and the cost of semiconductor production machinery and equipment can be expected to climb further. As a result, manufacturers have to sell ever larger volumes of chips in order to remain profitable.

 

Summary

All of the above can be observed, more or less, in memory sizes as well: the amount of memory in an average configuration has followed, and is still following, a similar course. I remember that once upon a time (quite recently, just a few years ago) 128 MB of RAM seemed an unfathomable amount, and it was hard to find someone you knew who knew somebody whose acquaintance had a machine with 128 MB of RAM. Today even a modest graphics card carries that much memory, and a Windows environment running in only that much RAM is inconceivable.

I think there are real grounds today for 64-bit systems; in some fields this is practically self-evident, while for the average user we are in the middle of the necessary changeover. Windows Vista will initially be released in both 32-bit and 64-bit versions. Continuity is of vital importance in computing, and this parallel availability of a 64-bit edition is a good way to ensure it. Considering the life cycle of an operating system and keeping Moore’s law in mind, it is hard to avoid the conclusion that the Windows following Vista will in all likelihood be released only in 64-bit form. Perhaps this will be offset by a reduced-functionality edition.

