I don’t mean this to be metaphorical, but maybe 10 or 20 years from now it will be. Computers and computing require memory: they need to keep in-flight calculations and values close at hand. Because signals take time to travel, and because we need computers to be quick and responsive, particularly for video displays and games, memory, or RAM, has to be physically close to the devices and CPUs that use it. We’re talking inches away, inside our computers. In early machines, memory was held in magnetic cores and drums, in FIFO delay lines, and in cathode ray tubes (interesting, since CRTs went on to become best known as video display devices).
I have two goals here: 1) I want to talk about memory, how it affects computing in general, and how to keep it in mind and look out for situations where you’re short of it, and 2) I want to talk about how, just as local storage (of files, etc.) has recently become more of a commodity that we can get more of on demand, I suspect memory may become one too (and I think in some senses it already has).
Like I said above, memory or RAM (Random Access Memory) has been a feature of computing since the 1940s, and has been commercially available in solid-state form since the 1970s. The reason computers need it is that the CPU (Central Processing Unit) or main computer chip (and the other supporting chips) can be thought of as a bunch of switches. In different configurations, the switches come up with different calculations and different answers. But as with a scientific calculator, it helps to have a place to stow intermediate answers until the final result can be presented. And the more memory you have to work with, the more complex and speedy the calculations you can do. Essentially, every computational operation, every computerized display, every streamed video, every snapchat, tweet, and facebook post (and every character, every transmitted bit, every WiFi or cellular connection, and every moving, calculated, or computed part of every little thing, in picoseconds, in 2017 and 2018, probably moving into femtoseconds soon) is represented in some memory somewhere, between you and as far out into the internet as your transaction goes. For more information on the development of memory technologies in computing (both memory and storage, the lines between which are constantly being blurred), try this excellent video from the Crash Course set of Computer Science videos.
And because we’re talking about transactions in picoseconds, and computers and mobile devices still signal with electronic signals, it’s useful to know that in 1 picosecond (a very short time even for a modern computer), electronic signals can travel less than 1/64 of an inch (about 0.3 mm).
For an even less abstract example, the fastest gaming CPU right now is the Intel Core i9-7980XE. Its fastest boosted (Turbo Boost) frequency is 4.40 GHz, which translates to about 227 picoseconds per cycle. An electronic signal traveling at the speed of light will cover at most 2.683 inches (6.814 cm) in that amount of time. Now, the good news is that the zillions of actual operations that computers do are composed of 10s or 100s of these cycles, so we get a little more leeway, but we are getting close to the limits.
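The arithmetic here is easy to check for yourself. A minimal sketch (assuming the vacuum speed of light; signals in real copper traces propagate noticeably slower, so the practical distance is even shorter):

```python
# Convert a CPU clock frequency to its cycle time, and see how far
# light can travel in one cycle. Real electrical signals in copper
# move at roughly 50-70% of this speed, so the usable distance is
# even shorter than what we compute here.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def cycle_time_ps(freq_ghz: float) -> float:
    """Picoseconds per clock cycle at the given frequency in GHz."""
    return 1000.0 / freq_ghz

def distance_per_cycle_cm(freq_ghz: float) -> float:
    """How far light travels (in cm) during one clock cycle."""
    seconds = 1.0 / (freq_ghz * 1e9)
    return SPEED_OF_LIGHT_M_PER_S * seconds * 100

print(f"{cycle_time_ps(4.40):.0f} ps per cycle")         # ~227 ps
print(f"{distance_per_cycle_cm(4.40):.2f} cm per cycle")  # ~6.81 cm
```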
Anyhow, that all sums up to why, right now, every computing device we have holds some kind of RAM, and quite a lot of it, within the case.
Why so much? Basically because as our computing platforms and our interconnected Internet have matured, software developers, engineers, and all the people whose jobs sit between us and our content (many, many people – think movie credits many) have demanded more slickness, rapidity, and elegance in our end-user experiences, and every bit of that costs in CPU cycles and, by extension, memory.
Consider this blog entry you are reading right now. I’m going to assume you’re reading it on the https://geekblog.malcolmgin.com/ server/service, and not on some RSS newsreader, via Facebook, or some other News or Web Page saver, but straight up through a web browser on a computer, over the Internet. There are many layers involved. It starts with me writing the post and posting it, which I do over a web browser connected via the Internet from my home network to the https://geekblog.malcolmgin.com/ server. Once I post it, you can read it.
Say you are logged into Facebook through a browser and you see the post notification come across your Facebook news feed. Interested, you click on the post link and it loads the entry on https://geekblog.malcolmgin.com/.
At every level of transaction, there is a CPU asking its RAM to store something or to retrieve something already stored. There are calculations with intermediate steps where RAM stores and presents data to the CPU. There are network transactions where values stored in RAM are sent across network links to other machines and CPUs and other RAM. RAM figures in almost every step in some way, as a CPU puts some value aside for a few picoseconds so it can do something else, then picks it up again if need be. There is RAM (called cache) adjacent to every level of the CPU, there is RAM on the motherboard’s bus, there is RAM assisting network cards (WiFi and wired), RAM in network hardware (even hubs and switches), and RAM assisting the video displays.
Most modern systems ship with enough RAM to do whatever they need to do. Operating systems are specifically designed to work within a minimum amount of RAM, and most hardware vendors go out of their way to make sure that device owners obtain or have enough RAM to do what they need to do (a notable exception here is gaming performance for desktops and laptops, because RAM shortfalls can drive sales of new computers and platforms). This matters because without enough RAM, the user experience of a computing system is poor. Processes can halt or slow down. Video displays can get jittery and the frame rate drops. Just like needing hot and cold running Internet, modern computer systems need an excess of RAM. Without it, things slow down and stop, or simply fail, and you see error screens and messages while doing your computing. Reliability for single transactions drops and you feel the CPU’s pain by seeing your transactions intermittently fail.
Recently, I noticed that this very blog server seemed to be having issues with transactions and (image) uploads failing, so I contacted my support folks, and they told me they were seeing out of memory errors for my userid on the shared hosting server I use.
Now, I am not running servers or services that are traditionally associated with high memory use. My servers consist of a few MediaWiki wikis, some WordPress blogs, some static HTML and image web servers, and some very lightly used support services, like an OpenID server. Given that, I was perplexed, but I’d forgotten that multiple services add up, and that I was running all of these services under a single user account. My support folks let me know that the RAM limits were per user, so the first level of fix here was to break up each service under its own user account on the shared host. Once I did this (though to be fair, I have a few old blog servers still running, along with this one, under a single user account – fixing that is a longer term project since they’re all under the same subdomain), the issues with this blog cleared up.
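If you’re debugging a situation like this yourself on a Unix-style shared host, you can inspect the resource caps your own processes run under. A minimal sketch using Python’s standard resource module (Unix/Linux only; which limits a host actually enforces varies, and my host’s configuration may not match yours):

```python
# Print the memory-related resource limits this process runs under.
# On shared hosting, per-user or per-process caps like these can
# trigger out-of-memory failures even when the machine as a whole
# has plenty of free RAM. (The resource module is Unix-only.)
import resource

def fmt(limit: int) -> str:
    """Render a raw rlimit value readably."""
    return "unlimited" if limit == resource.RLIM_INFINITY else f"{limit:,}"

for label, rlim in [("address space (bytes)", resource.RLIMIT_AS),
                    ("resident set size (bytes)", resource.RLIMIT_RSS),
                    ("user processes", resource.RLIMIT_NPROC)]:
    soft, hard = resource.getrlimit(rlim)
    print(f"{label:26s} soft={fmt(soft)}  hard={fmt(hard)}")
```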
Obviously for a single computer with a single simultaneous user, like a desktop, laptop, notebook, or tablet, this kind of fix won’t help, since most processes running to support you (the user) are running under either the system or your own permissions. You don’t have a much larger pool of memory just waiting to be divvied up among other users. But you still have options. For desktops and some laptops/notebooks, you can open up the case and add more RAM. This is limited by the maximum amount of RAM your motherboard and CPU can use (or “address”, as in addressing an audience), and by the physical space and capacity (or “slots”) provided by your motherboard. But in some cases not only is adding more memory sticks an option, but you may be able to upgrade the capacity of the sticks already installed. Consult an expert on this – upgrades for memory can be tricky and full of hidden gotchas (e.g. sometimes you have to upgrade stick capacity in pairs, there’s speed of access to factor in, as well as bus speed, etc.)
Another stopgap fix is to change the amount of virtual memory your operating system is using. This applies more to Windows and some Linux distributions; as far as I know, Apple’s macOS handles this internally. Most modern operating systems can use part of the hard drive or SSD as overflow space for RAM-like memory, and this is often called “virtual memory” or “swap”. It’s a slow tech, because hard drive and SSD storage can be a lot slower than RAM. But if the OS optimizes its use, it can clear relatively static data out of RAM and stick it on the drive, letting the real RAM focus on the really transient, ephemeral data.
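On Linux you can see the split between physical RAM and this disk-backed swap in the kernel’s /proc/meminfo file. Here’s a minimal sketch that parses that format; the sample text below is made up for illustration, but on a real Linux machine you could read the actual file instead:

```python
# Parse MemTotal and SwapTotal out of Linux's /proc/meminfo format
# to compare physical RAM with disk-backed virtual memory (swap).
# The sample below is fabricated for illustration; on a real Linux
# box, read the contents of /proc/meminfo instead.
SAMPLE_MEMINFO = """\
MemTotal:        8167848 kB
MemFree:         1320792 kB
SwapTotal:       2097148 kB
SwapFree:        2097148 kB
"""

def meminfo_kb(text: str) -> dict:
    """Map field names like 'MemTotal' to their sizes in kB."""
    values = {}
    for line in text.splitlines():
        name, rest = line.split(":", 1)
        values[name] = int(rest.strip().split()[0])
    return values

info = meminfo_kb(SAMPLE_MEMINFO)
print(f"RAM:  {info['MemTotal'] / 1024:.0f} MB")
print(f"Swap: {info['SwapTotal'] / 1024:.0f} MB")
```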
Some mobile platforms, like tablets and cell phones, also have expandable memory (or at least expandable slower classes of memory). Many can take MicroSD cards and use them for a mix of expanding long-term storage and possibly even expanding virtual memory. Some can even use external drive technology to do it.
At some point, though, with modern computing architectures, compensating for shortages of RAM stops being practical, and you simply have to upgrade the entire device so you can start from a baseline where there is no RAM shortage.
Right now, we are at a point where tablets and cell phones have memory capacities of around 128 GB – 256 GB (note, for convenience, I’ll stick to manufacturers’ use of Gigabytes here, though what we really mean is about 120 GiB – 240 GiB – see discussion of the Gibibyte for more information – and I will rant about it later, I’m sure). The reason I note this here is that it was around this capacity for longer term storage (spinning hard disk drives and large SSDs) that storage became a commodity. For large scale commercial businesses, especially those using “cloud storage” or internet-based computing services for longer term data storage, the tech community started transitioning from building their own storage racks (where they had to design and budget for data-center racks of single drives to handle their storage needs) to just being able to buy storage by the GB or the TB, for USD cents per GB.
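The GB vs. GiB gap is just arithmetic: manufacturers count in powers of ten (10^9 bytes per GB) while operating systems usually report powers of two (2^30 bytes per GiB). A quick sketch:

```python
# Convert a manufacturer's decimal gigabytes (10**9 bytes) into the
# binary gibibytes (2**30 bytes) an operating system typically reports.
def gb_to_gib(gb: float) -> float:
    return gb * 10**9 / 2**30

for gb in (128, 256):
    print(f"{gb} GB marketed = {gb_to_gib(gb):.1f} GiB reported")
# 128 GB marketed = 119.2 GiB reported
# 256 GB marketed = 238.4 GiB reported
```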
For reasons I discuss above, this is going to be tricky for RAM. For long-term storage, where the cloud only stores the data and you retrieve it over the Internet, the retrieval speed is measured in milliseconds (roughly 6 orders of magnitude slower than RAM’s nanosecond-scale access times). It is true, though, that for cloud-based server farms, the RAM can be in close proximity to the server resources. For example, if you set up a server farm on Amazon EC2, the servers will run in close proximity to Amazon’s storage facilities (like S3), so the physical distance limitations will be reduced. I think if we commodify RAM, it’ll have to be in partnership with the commoditization of servers, containers, and other computing entities and the like.
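To make that orders-of-magnitude gap concrete, here are ballpark access latencies of the sort popularized as “latency numbers every programmer should know”. These are rough, commonly quoted figures, not measurements of any particular system:

```python
# Rough, commonly quoted access latencies, in nanoseconds.
# Real numbers vary widely by hardware and network conditions;
# what matters here is the relative spread, not the exact values.
LATENCY_NS = {
    "L1 cache reference":                  1,
    "Main memory (RAM) access":            100,
    "SSD random read":                     100_000,
    "Cross-continent Internet round trip": 100_000_000,
}

ram = LATENCY_NS["Main memory (RAM) access"]
for name, ns in LATENCY_NS.items():
    print(f"{name:36s} {ns:>13,} ns  ({ns / ram:>12,.2f}x RAM)")
```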
To be fair, a lot of this has already started, at least in server-land, where companies and startups do most of their work. Amazon sells services like Amazon Elastic Compute Cloud, Google has a similar offering, and there are ways we can do this – where we move computing to gargantuan server centers and where our computers in our laps or on our desks really just communicate with those servers wherever they live. But we’re not quite there. Even Amazon’s service still requires a good guess at what the servers will need. I’m thinking that within a decade or so, depending on what we decide to do, depending on how the market breaks, depending on some fundamental architectural decisions we make, and depending on how the Internet grows, we may be able to treat RAM, like we have recently done for storage, as a commodity: if we want more RAM, we pay for it and that’s it. We don’t have to do any other figuring or installation, we’ll just have it.
Cross fingers! (Or, you know, wires, like in magnetic core memory.)