Originally posted January 17, 2021
Back in the ’80s, when we cooked popcorn and reheated coffee on the stove, and when “let’s go to the videotape!” actually involved videotape, we already had both private and public networks. We had external media, long- and short-term storage, and even L1 caches. It was glorious.
I spent much of my childhood transferring data from external media into local storage and into my L1 cache. I read anything I could get my hands on. I learned about history and biology and sociology and medicine and art and physics and music and poetry and responsibility and chemistry and psychology and just about everything by having it beamed to me through the air… and also by living it much more deeply than I tend to do today. When I needed information I used the telephone network to ask people who knew more about it than I did. If I needed a lot of information I’d ask my parents to take us to our Town Data Repository, where I could find almost anything I needed, neatly indexed by subject, author, and title. And if that turned up empty, I could ask them to access an even larger network, with even more data, and transfer the information to me.
But you asked how people worked back then.
The short answer is that we relied far more heavily on our own L1 caches and local processing units, and also on those we could network with either over the air or via telephone.
My first few engineering jobs involved building stand-alone industrial appliances based on various microcontrollers like Intel’s 8051.pdf and Motorola’s M68HC11E.pdf and MC68360UM.pdf. If you look at any of those PDFs, you’ll see that each one tells you literally everything you might need to know about the part, from its physical dimensions to the amount of power it draws to the purpose(s) of every pin. The manufacturer also provided a programming manual like the M68000PRM.pdf that explained the memory map, the purpose of every register and how to use it, and even the individual bits that make up each of its instructions.
Most of us had our own personal storage area that housed all of this data, as well as the data for the other hardware and software components we used like the 16550.pdf serial transmitter and receiver. Anyone who was serious about the newfangled Internet thing also had a set of Stevens’ TCP/IP Illustrated data: Volume 1: The Protocols, Volume 2: The Implementation, and Volume 3: The One Everyone Forgets About.
They were all indexed and searchable, but lookups could be slow. To compensate for that, we created pointers to the sections we needed to reference most often.
Sometimes we got stuck!
As you can imagine, before Stack Overflow and Google, when we came upon a roadblock we had limited options. We could ask our colleagues for help, or call the manufacturer, or maybe post a question to one of the Usenet groups. But when you’re doing something that probably hasn’t been done before, or at least hasn’t been done very often, you’re pretty much on your own. So more often than not, we had to figure stuff out for ourselves. And that could be hard!
Here’s the thing, though:
I keep referring to our brains as an “L1 cache,” but only somewhat in jest. There’s a fairly deep analogy between the way we store and process information electronically and the way we do it with our own cranial meat. In general, the farther away the data is from a CPU, the slower the storage and the longer it takes to access it. Tape storage takes forever to access because it involves physically moving and inserting cartridges into mechanical hardware. Disks are faster, but they still take a relative eternity to read and write. DRAM (most computers’ “main memory”) is orders of magnitude faster than either, and SRAM, the kind of memory used for the caches built into the processor itself, is faster still. But even cache isn’t fast enough for most processors to actually work with directly, to perform calculations that manipulate the bits. At least historically, the only memory a processor can use directly is a tiny set of registers that each store only a single piece of data: one number or letter, or maybe the color of a single pixel. If you want to process an encyclopedia’s worth of data, each and every byte must make its way through one or more registers inside the CPU before a program can operate on it. Everything else, however many terabytes or petabytes there might be, is just waiting.
If a program “knows” it’s going to need a certain set of data, it can move that data ahead of time from slower memory into the processor’s onboard cache, or at least into faster memory, so the processor won’t have to wait long for it. The bigger and faster the cache, the quicker the CPU can process a large set of data.
Our brains work almost exactly the same way.
Suppose you need to write a paper about a subject you know very little about. You’d probably start by doing some research. As you began you’d need to learn how to tell what was relevant and what you could ignore. You’d have to figure out which information was credible and which was inaccurate. You’d begin to assimilate this knowledge, building a context in which you could more effectively interpret new information. You’d recognize domain-specific language without needing to look up every word. You’d internalize the concepts. Ideas and phrases would become familiar, and you’d begin to get the “big picture.”
The key is that the more information you already know, the more effectively you can incorporate new information. You can make connections between seemingly disparate topics. You can compare and contrast. Most important, you can synthesize new information by considering your knowledge and experiences in previously untried combinations.
In some ways, workers in the ’80s had a huge advantage.
That’s because, without the Internet, we were forced to remember things, to be resourceful, to learn “the hard way” through many trials and lots of error. And because of that, we came away with a much deeper understanding than we get when we copy and paste something from a web site without taking the time to truly understand and internalize it.
Don’t get me wrong: there’s plenty that we can learn from the Internet, with all the world’s information literally at our fingertips. The Internet makes it possible to get so much more information, at virtually any depth we’d like, than was available in the ’80s.
But it’s up to us to decide whether we’ll take advantage of it to learn, explore, and grow, or just to solve the problem of the moment and then move on.