Thinking, Computing, and Improving Both

By Paul VanRaden

2025

 

Introduction

Thinking lets humans understand, plan, and decide what to do instead of relying on instinct alone as other animals do. Computing now does many tasks for humans and helps us understand, plan, and decide better than our ancestors could. We now hold computers in our hands, use huge computer clusters far away, and interact with Artificial Intelligence (AI) using methods that few people understand. Thinking is different from computing, but studying computers may help our brains think more clearly. Also, learning how a person's brain organizes its thoughts may help AI to guide us better.

After 40 years of writing computer programs to process large, complex datasets, I wondered if directly comparing thinking to computing could help us discover what each does best, how they interact, and how to improve both. Your brain may think more clearly and more reliably using your neurons, shaped by millions of years of selection, instead of using AI based on silicon chips invented the same year I was born, 1960. Young people who grew up with computers may like to learn how previous generations did anything useful with few or no computers. Before 1960, computing was more of a "hands on" experience, as shown in The Imitation Game, an excellent movie about code breaking during World War II. Today's computer algorithms and large datasets may 'imagine' an answer, but your lifetime of real-world experience helps you understand what really works.

 

Topics

Thinking

Computing

Artificial intelligence

Connections and ideas

Sorting

Memory management and paging

Linked lists

Parallel processing

Multitasking

Large datasets

Data storage, access, and cost

Cause and effect

Probability and Bayes theorem

Interacting with AI

Background and advice

Reviewer comment

 

Thinking

To think, you need no extra hardware, only your brain and its roughly 86 billion neurons. You have more than twice the number of neurons of any ape but fewer neurons than elephants or some whales. To think clearly, your neurons must first make sense of inputs from your 5 senses: sight, hearing, touch, taste, and smell. Thinking without acting is very useful for practicing, but ideally your thoughts lead to actions such as doing each day's tasks or improving your own or others' lives.

You also need no extra hardware to act on your thoughts. You can talk or smile or blink or move your fingers or hands or feet using your muscles. But sometimes extra hardware can help your senses get more or better inputs or help your muscles send your actions further or faster. You can record your thoughts and store your words on paper using crayons or pencil or pen, or store them in a report by typing on a keyboard. Voice recorders and cameras can also store your sounds and actions for wider or later use. The internet can send your thoughts and show your actions to anybody in the world who is interested and who also has electricity, a computer or smart phone, and internet access that is not blocked by a government controlling what they see.

 

Computing

         Computers always have both hardware and software. Hardware refers to the parts that do the calculations or store the inputs and the results. Software refers to the instructions that tell the hardware what to do. Software now usually comes in electronic files that are easy to transfer, but when I started computing at the University of Illinois in 1980, the software instructions were still in a series of holes we punched into a deck of paper cards. Computers with different hardware should give you the same answer if they both run a copy of the same software and access the same input data. Computer results are consistent but can be wrong if the software is not designed well or the stored data do not answer the question.

Brains have only hardware and no software. You cannot copy your internal hardware or the methods you use for thinking directly to any other person. Each brain is unique, and each person may answer the same question differently based on their own lived experience, stored data, and thought process. Our brains have first-hand experience from years of living in this world, whereas AI has only second-hand knowledge, which is not admissible in a court of law.

 

Artificial intelligence

My Agricultural Economics 436 instructor at Iowa State in 1983 taught us an early form of AI. His example used primitive AI instead of your own brain to schedule where trucks should go to deliver orders to scattered customers. He gave us several minutes to plan the best routes that 2 imaginary feed trucks should drive to go the fewest miles while delivering the amount of feed that each customer ordered. The next week, after checking and tracing how far each student's trucks traveled, he handed back our "quiz" results. Then he handed me a real quarter (25 US cents) and explained that my imaginary trucks were the only ones in the class that traveled fewer miles than the routes produced by AI. That 1983 lesson convinced the class that computer programs can do important business tasks better than most college students can.

Econometric statistics were probably the part of my training closest to AI, because in an economy almost anything can affect almost anything else, and in both directions, so causes and effects are not very clear. Genetic statistics were much easier because genes affect almost all of life, but the process of living does not affect the genes except for a few rare mutations, which may turn into cancer or other diseases. Those who learned AI tools sometimes got higher-paying jobs, for example at Facebook, where AI may have larger benefits than standard statistics that model direct causes and effects.

 

Connections and ideas

         Reading this article may cause your brain to connect ideas about how to think clearly or compute efficiently that it previously may have stored separately. Those new connections will be stronger if you agree that thinking and computing are similar processes using different hardware. Based on your experiences, you may easily imagine further ideas or examples that I did not or could not think of.

We may imagine that Thomas Edison was thinking about a light bulb and then he reached over to the switch and turned it on. Many good ideas start out as new neural connections but have no impact because we do not have the time or the resources to develop our ideas into a working product. By sharing ideas, we keep them alive so that others with more time and energy might make them light up someday.

 

Sorting

Before electronic storage, having multiple copies of each document in differing sort orders could be helpful. For example, when I started at USDA in 1988, the letters we sent were copied 3 times and filed into 3 separate cabinets sorted by date, alphabetically by subject, or alphabetically by person's last name. That made finding a letter easier if you knew any of those 3 properties. Even after electronic storage, multiple copies in different sort orders were used for merging data types and for solving equations because computer tapes required processing each large file sequentially. Sorting became less important for us in 1993 with more memory and spinning disks, and later when solid-state drives allowed faster access to each record.
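
If you like to think in code, here is a minimal sketch of the three-cabinet idea in Python, with made-up letters, subjects, and names: the same records are kept in three sort orders so that knowing any one property leads quickly to the right copy.

# The same made-up letters kept in three sort orders, like the three
# filing cabinets: by date, by subject, and by person's last name.
letters = [
    {"date": "1989-02-03", "subject": "animal model", "person": "Smith"},
    {"date": "1988-07-12", "subject": "sire summaries", "person": "Jones"},
    {"date": "1988-11-20", "subject": "animal model", "person": "Brown"},
]

by_date    = sorted(letters, key=lambda r: r["date"])
by_subject = sorted(letters, key=lambda r: r["subject"])
by_person  = sorted(letters, key=lambda r: r["person"])

# Knowing any one property lets you scan the matching copy quickly.
print([r["date"] for r in by_subject if r["subject"] == "animal model"])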

Similarly, your brain may store many events in date order so that you can remember what happened first, perhaps to help determine cause and effect. Your brain may also link an event to one or more subjects and to those people most involved in that event. Your brain likely does not link subjects or people to the alphabet but rather to the nerve group where that subject and those people already are stored in your brain. Electronic files instead of paper files have made computer searching much faster, but our neurons still work at the same speed as our ancestors' neurons and probably the same speed as apes'.

 

Memory management and paging

Your brain uses many more neurons to store long term memories than to store or process new inputs from your senses each day. While you sleep, more important events of the day may be converted from short-term to long-term memories. Your neural connections will get stronger among people and events that happened together, grow weaker for people and events that you no longer see, and you will form new neural connections to help you remember new people or new events.

Computers also have different memory types, and short-term memory usually has the fastest recall. Fast computers and algorithms should put the data they need next into their fastest-access memory using a process called paging. IBM invited and flew scientists with the most difficult problems to Los Angeles to test our programs on their newest computers in cooperation with their research team. While there in 1987, I convinced their team to use a new memory management strategy that they had been testing for computers with multiple users. Instead of reloading the pages needed by each program every time it runs, the most recently used pages are left in memory until that program or another program needs the space. That memory management strategy is used in most computers today.
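
A minimal sketch of that idea in Python, assuming a tiny cache of 3 page frames: recently used pages stay in memory, and the page untouched the longest is replaced when space runs out. This is a generic least-recently-used cache for illustration, not IBM's actual design.

from collections import OrderedDict

# A tiny least-recently-used page cache: recently used pages stay in
# memory; the page untouched the longest is evicted when space runs out.
class PageCache:
    def __init__(self, frames=3):
        self.frames = frames          # how many pages fit in fast memory
        self.pages = OrderedDict()    # page id -> contents

    def read(self, page_id, load_from_disk):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)    # mark as recently used
            return self.pages[page_id]
        if len(self.pages) >= self.frames:
            self.pages.popitem(last=False)     # evict least recently used
        self.pages[page_id] = load_from_disk(page_id)
        return self.pages[page_id]

cache = PageCache(frames=3)
data = cache.read(42, load_from_disk=lambda p: f"contents of page {p}")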

You might use similar memory management strategies. Some students like to cram before big tests by trying to load answers to the most likely questions into their quick-access memory so they do not need to dig for the answer deep in their longer-term memory. I used the opposite strategy and tried to organize my long-term memory efficiently so I would also remember the answers years later instead of only on test day. I tried to always get an extra hour of sleep the night before each big test.

When replying to emails, waiting a few hours is often a good strategy to give your long-term memories a chance to add to your reply. Pre-filling your short-term memory is also very useful, for example by rereading the list of participants right before going into a meeting. I always tried to do that to better recall names or faces of the people who would be there. Remote meetings now often include each person’s name under their face so that trick is no longer as useful. Pre-filling your memory is most useful right before presenting to an audience, especially pre-filling your opening lines, or if you have only a short time to discuss.

Recalling your address or your car’s license plate number from long-term storage may be easy if you saw those many times, but if a car hit yours today, you might forget its license plate unless you repeat the number continuously until you write it down or store it externally. Storing your long-term memories more efficiently can free up more neurons for short-term processing. Doing full-time research continuously for 40 years gradually shifts more of your neurons toward connecting new ideas instead of repeating the old ideas that your teachers taught you. Those old ideas may still be correct or need to be updated if they conflict with newer data or better ideas.

Physical reminders can also free more short-term memory for doing other tasks. While returning home from work each day I would try to remember what foods I ate in previous days to decide what food I should cook next. In about 2010 I listed my meals on a paper I kept on the kitchen countertop, with a small cow pointing to the meal I had yesterday. Then each day just before starting to cook I moved the marker to today's meal. This evening, that same paper, list, and cow will tell me what to cook next, as it has each day for the last 15 years, with no short-term memory wasted on food decisions. Putting physical reminders right next to your outer door can also help you remember what you want to take with you when you leave.

 

Linked lists

Your brain and computers can use linked lists to remember things such as family members. The first item can link to the second, and the second to the third, and so on. The list is complete when the next link is null or links back to the first to form a circular list. For example, my family of 8 is stored by linking my 2 parents to their first child Mark, linking Mark to the next born Miriam, linking Miriam to the next born Deb, etc. Linked lists are often more efficient than storing names in a rectangular table because they do not reserve empty space for the maximum family size possible. For cow pedigrees, we link each parent to all the daughters separately from all the sons because separate searches are faster if sex is already known and the families are very big, with thousands of siblings.
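
A minimal sketch of such a list in Python, using only the three children named above and a plain dictionary for the links; a None link marks the end of the list.

# A singly linked list of siblings in birth order.  Only the three
# children named in the text are shown; the rest of the family is omitted.
next_child = {"Mark": "Miriam", "Miriam": "Deb", "Deb": None}

def list_children(first_child):
    child = first_child
    while child is not None:          # a None link marks the end of the list
        print(child)
        child = next_child[child]

list_children("Mark")                 # prints Mark, Miriam, Deb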

 

Parallel processing

         From 1960 to about 2000, each program used a single processor. Computer designs focused on making those processors compute faster. More recent computer designs use many more mass-produced processors that are cheaper instead of faster. In earlier years, parallel processing simply chopped the input data into many pieces and gave one piece to each processor, but often with some loss of accuracy compared to seeing all the data at once. More recently, parallel processing can let one program manage all the data together and control many processors while each works on a subset of the problem or data.
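
A minimal sketch in Python of the older, simpler style, assuming the work is just summing made-up numbers: chop the data into pieces, give one piece to each worker process, and combine the partial results at the end.

from multiprocessing import Pool

# Chop the data into pieces, give one piece to each worker process,
# then combine the partial sums; the numbers here are made up.
def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]      # 4 pieces for 4 workers
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)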

          Early computers were mechanical and simpler. In my office I kept an abacus that my mother gave me and a slide rule that scientist Bob Miller gave me. He used it in his research at North Carolina State before being the first to use computers to process very large data files at USDA in the early 1960s. His programs and one computer replaced about 100 employees whose job title was “computer.” Switching from hand calculations to a computer in our laboratory was much like that in the movie Hidden Figures but happened a few years earlier because dairy geneticists had much more data than NASA had.

The abacus and slide rule reminded me to keep computing simple so it could be scaled up for large problems. In the 1990s I used to say that a billion Chinese people each with an abacus and using parallel processing could do our computing faster than we could with a computer, but that is no longer true due to much faster computing today. Also, Chinese people now have much better jobs to do than just using an abacus, and American people now have much better jobs to do than just using an adding machine or a slide rule to compute all day long.

 

Multitasking

At any one time, millions of neurons in your brain may be either growing or shrinking and gaining or losing connections to other neurons. But humans have only 1 set of language processing neurons and so we can only input or output one stream of information at a time. We may have many thoughts, but we can only express one at a time. We may have many books, but we can only read one book at a time. But you can multitask by filling different areas of your brain with useful information on different topics and then giving each area of your brain some hours, or days, or years, or decades to distill that information into a well-organized theory of how it all fits together.

When I started research for USDA in 1988, each Monday morning I would go to the office and ask myself, “What project should I work on this week?” Then I would spend that whole week on that task. During my last year at USDA in 2024 I could never spend a whole week or even a whole day on one task. I needed to read perhaps 100 emails in my inbox, answer at least 10 of them, and converse with my team of about 10 people. That usually took about half of each day. I kept a paper checklist with their names at the top and days of the month down the side. At the end of every day I put a check mark for those I spoke or wrote to, ensuring that I regularly interacted one-on-one with each employee every week.

Thinking in parallel was necessary to keep up with so many topics at once. But communicating directly with each member of the team was also important to hear the progress on their project and to give directions on what research might work next. We also had different team meetings at least 3 days each week to involve more researchers and help each to see the big picture. They would ask questions or contribute further ideas that people working directly on the task had not thought of. We got so many useful things done in recent years that it is hard to believe that Republicans in 2025 decided to destroy our research program.

 

Large datasets

Small datasets can be processed using personal computers or cell phones, but large datasets often require large, specialized computers. Staff of the Holstein Association USA in Vermont flew to the University of Wisconsin twice each year until 1988 carrying magnetic tapes full of millions of cow records and pedigrees. The biggest 2 computer users on campus at that time were researchers in cow genetics and in weather prediction. “Cloud computing” now lets your data travel to a large, remote, specialized computer without you having to carry it, but for large files, physical transfer may still be faster than internet transfer.

The value of large datasets is that they can answer questions more precisely or completely than any small, local dataset can. When you use a search engine, you hope that if the answer exists anywhere on the internet, the search will find it and show it to you. But finding the answer would take a long time if the whole world's data were not already indexed in a massive file on some massive computer farm where your search question goes to find it. Even the most popular questions get pre-loaded into your search box when you start typing, and answers to those questions may already be computed and stored before you ask them.
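
A minimal sketch in Python of why a pre-built index makes search fast, with a few made-up documents: the word-to-document index is built once, ahead of time, and each later question becomes a quick lookup instead of a scan of every document.

from collections import defaultdict

# Build the index once, ahead of time, so each question is a fast lookup
# instead of a scan of every document; the documents here are made up.
documents = {
    1: "cow pedigrees and genetic predictions",
    2: "weather prediction with large datasets",
    3: "genetic evaluation of dairy cows",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

print(sorted(index["prediction"]))    # documents containing "prediction" -> [2]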

 

Data storage, access, and cost

You can find better answers faster if reports on similar topics are grouped well. Old-fashioned libraries used to provide that service for us. Before the internet provided so much information so quickly, we used well-organized paper files to do our research. In my office at USDA, my own paper files totaled about 50 linear feet of cabinet drawer space. Every time I looked up reports from my files and cabinets, which was several times per day, my brain also mapped that related information together internally. My neurons stored related articles together just like my folders did. By using this compact neural storage, I could remember the years and subjects in which my boss and my distinguished coworkers had published their reports better than they could themselves. I am sure that using my well-organized physical files greatly helped organize those same topics into my neurons.

My goal was to find most reports that I needed within 5 seconds. The files I used most often were in two 4-drawer cabinets on either side of my desk. To reach them I only had to swivel my chair to the right or left and lean forward (1 second). I kept those 8 drawers labelled and filled by major subject and could pull out the correct drawer >90% of the time (1 second). I always stored files together if their topics were most related and I always kept files in the same order so I could locate the correct file almost by muscle memory (1 second). My files were usually less than an inch (2.54 cm) thick, and within each file the most important report(s) that I wrote were in front to let me quickly retrieve my previous thoughts (1 second). Other reports on that same topic were carefully organized by subtopic or by date and paperclipped or binder clipped together within the folder. The subtopic or time group was labelled on a sticky note sticking up and visible to quickly find the correct paperclipped group (1 second). Finding the correct report within a paperclipped group might take a few seconds.

Thus, finding my own reports usually took 3 seconds but finding reports from other authors often took 5 or 10 seconds. My paper file system was competitive with today’s search strategies but could not keep up with the volume of new electronic articles that arrive today. My file system had a lasting impact on my brain that individual electronic searches today might not.

The information you may want to see may not be free. Often, to read the full story you need to subscribe to the newspaper or purchase the book. The U.S. government’s policy was often to provide our answers for free because taxpayers had already paid us. I will not further comment on how to value the data or information you need because I am not an expert on that topic.

Sizes of data files can sometimes be greatly reduced by referential compression, which stores only differences from a reference instead of the complete data or picture. DNA sequence is now often stored as differences from the reference genome in CRAM files. Your brain may do the same thing. For example, you may store what an average human face looks like and then store each person's differences from that average face instead of storing each whole picture in your brain. Once you create a well-organized map of subjects in your brain, new things that you see may be stored only as differences from the average of what you saw before.
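
A minimal sketch in Python of the idea, with made-up sequences: store only the positions where a sample differs from the reference, then rebuild the full sample from the reference plus those differences. Real CRAM files are far more elaborate, so this shows only the principle, not the format.

# Store only the positions where a sequence differs from the reference,
# rather than the whole sequence; the sequences here are made up.
reference = "ACGTACGTACGT"
sample    = "ACGAACGTACCT"

differences = [(i, base) for i, (r, base) in enumerate(zip(reference, sample)) if r != base]
print(differences)                     # [(3, 'A'), (10, 'C')]

# The full sample can be rebuilt from the reference plus the differences.
rebuilt = list(reference)
for i, base in differences:
    rebuilt[i] = base
assert "".join(rebuilt) == sample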

To reduce clutter in my brain, I tended to watch only documentaries and not science fiction or dramas because I wanted to solve only real-world problems and not imaginary problems. Along with filing visual inputs such as journal articles, I also grouped audio inputs on topics I cared about onto cassette tapes. You can read about and hear my song playlists in Rock and Roll Songs by Topic. Those playlists had perhaps a bigger impact on my thinking. When I hear the end of one song on a cassette, I begin thinking of the next song in the series because those songs are stored together in my brain.

 

Cause and effect

Data files can help us understand which events are correlated but not always whether one event causes another. Geneticist Sewall Wright of our USDA laboratory in Beltsville, MD, first published about cause and effect in 1921. For example, if you collect data on cars driving down the road, the correlation of rain falling with windshield wipers wiping is almost 100%. An AI program might suggest that you could end a drought by turning on your windshield wipers unless it is told which event is the cause and which is the effect.
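
A minimal sketch in Python with made-up observations: the correlation of rain with wiper use is the same whichever variable you call the cause, so correlation alone cannot point in either direction.

# Correlation is symmetric, so the data alone cannot say which event
# causes the other; the rainfall and wiper-speed observations are made up.
rain_mm     = [0.0, 0.2, 1.5, 3.0, 0.0, 2.2, 0.1, 4.0]
wiper_speed = [0,   1,   2,   3,   0,   3,   0,   3]

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Prints the same high correlation whichever variable is treated as the cause.
print(correlation(rain_mm, wiper_speed), correlation(wiper_speed, rain_mm))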

 

Probability and Bayes theorem

         Your daily life and your long-term future can require knowing the probabilities of what might happen that day or in future years. You may want to carry an umbrella or a jacket when the probability of rain or a cold front is above a certain minimum. Those probabilities may already be computed and easily displayed on your phone, television, or computer. Other probabilities are specific to you and harder to compute, such as: if you take a certain action, how often will the result be a success?

Statisticians often use Bayes theorem to compute probabilities for future events. Your initial guess might be a 50% probability of making a sports team if the team needs 10 players but 20 players try out. Your probability may go up or down if your own performance in the tryout is better or worse than the others'. Your brain does those calculations all the time to help decide if you should focus on one project or another. Each day you should combine your prior beliefs with the new data to obtain posterior beliefs that are more accurate than your initial guess without the data.

Instead of trying to remember and recompute all the past data you ever observed, your brain may use a simple statistical trick. Each night as you sleep, your brain may combine your prior beliefs from yesterday with the new data from today to compute new prior beliefs to use tomorrow. Then it just needs to replace your past, less accurate beliefs with your new, more accurate beliefs. You might wake up tomorrow thinking, yes, I can make that team, or no, that sport is not for me. Your neurons may never really use Bayes’ theorem, but by updating the neural connections that quantify your beliefs, you may live a Bayesian life without ever thinking about the formal math of Reverend Thomas Bayes (1763). Most of your probability calculations may even happen while you sleep.
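
A minimal sketch in Python of that nightly updating, using the tryout example with made-up likelihoods for how probable each day's performance would be if you will make the team versus if you will not: yesterday's posterior simply becomes today's prior.

# Bayes' rule applied one day at a time: yesterday's posterior becomes
# today's prior.  The daily likelihoods below are made up for illustration.
def update(prior, p_data_if_make_team, p_data_if_not):
    numerator = prior * p_data_if_make_team
    return numerator / (numerator + (1 - prior) * p_data_if_not)

belief = 0.50                          # initial guess: 10 spots, 20 players
for p_if_make, p_if_not in [(0.8, 0.4), (0.7, 0.5), (0.9, 0.3)]:
    belief = update(belief, p_if_make, p_if_not)
    print(round(belief, 2))            # belief rises after each strong tryout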

 

Interacting with AI

Current AI tools can interpret a spoken question and give the most likely answer after consulting a huge database of relevant information. The user can try hard to ask the right questions about the most important problems or can use AI more for entertainment. The programs I wrote were almost always meant to predict the future rather than to summarize the past. That may be harder for AI to do. We used data from past generations of cows to predict performance for each calf in the next generation. That requires massive, specialized data files that AI does not have access to. We understood exactly which data our predictions were computed from. Computers can calculate predictions, but currently AI may do a great job only of summarizing public data from the past.

 

Background and advice

For 40 continuous years I thought full time about how to compute. Computational strategies for estimation of variance components was the title of my 1986 PhD thesis at Iowa State. That topic is still important, and we still use that computer code from 1986 in research today. Those same programs helped IBM improve memory management in 1987 for the computers they designed. My most important paper, Efficient methods to compute genomic predictions in 2008, was also about how to compute.

I started thinking more about how to think in 2007 when I started coding for parallel processing. Your neurons do not think in one straight line to solve a problem like computer programs did decades ago while reading data from a long magnetic tape. Thinking in parallel and computing in parallel seemed to go together because neurons in many areas of your brain can cooperate to find an answer, like many processors cooperating by each doing some of the work so that even a big job can get done quickly. Writing this report took less than a week because I must have already thought a lot about how thinking compares to computing.

Parallel computing and parallel thinking are both harder to do than simple math problems where you take x plus y, divide by z, and square the result, or tasks like that. But parallel thinking is not as hard to do as designing an AI program to be as intelligent as you are.

Computers can process much more data much faster than you can, but they do not have the instinct and experience that your brain has for deciding what you should do next. If you are employed, getting your work done and getting your pay are usually the best use of your time. If you are a student, none of this material will be on your next test, so you should instead study the material that will be. If you are retired like me, then you can let your mind wander to new topics, like I just did. Computers still do not know how to let their minds wander. That is something humans still need to do.

You got all these sentences for free. I will not profit if you use or do not use my ideas. Some people may believe that free advice must be worthless, otherwise I would charge you for it. Other people believe that we should freely share ideas that make sense. For 37 years, US taxpayers paid me to think of and give away all my best ideas. You may believe that this is an old, bad habit that I should give up, and that I should instead start selling my advice if it has any value. I believe that thinking is a skill that we each can improve by practicing it.

 

Reviewer comment

         An ‘anonymous’ reviewer sent me this comment: “Thank you for sharing great information and examples from your real experiences. I need some sleep now so my neurons can make new connections with all this information. Please continue sharing your knowledge so generously, as there are people like me who will greatly benefit from it.” [10:30pm, August 24, 2025]

 

References

How the Brain Stores and Retrieves Memories

How Memories Are Formed and Where They're Stored | Psychology Today

Human multitasking - Wikipedia

Computing - Wikipedia

Computation and Computational Thinking

Computational Thinking Is More about Thinking than Computing

Hidden Figures

The Imitation Game [1 hour 47 minute film]