Semiconductors: Why Is the Human Brain More Energy Efficient Than LLMs?

Eureka Blog

The brain is often said to consume about a fifth of the body's energy, more than any other organ. Yet it is far more energy efficient than large language models. A recent Daily Comment in The New Yorker, The Obscene Energy Demands of A.I., states that, “It’s been estimated that ChatGPT is responding to something like two hundred million requests per day, and, in so doing, is consuming more than half a million kilowatt-hours of electricity. (For comparison’s sake, the average U.S. household consumes twenty-nine kilowatt-hours a day.)”
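Taken at face value, those estimates can be put into perspective with simple arithmetic. The sketch below uses only the numbers quoted above, plus the commonly cited approximation that the brain runs on roughly 20 watts; that last figure is an assumption for comparison, not a number from the article.

```python
# Back-of-envelope arithmetic using the estimates quoted above; the ~20 W
# figure for the brain is a commonly cited approximation, not from the article.

requests_per_day = 200_000_000     # "something like two hundred million requests per day"
chatgpt_kwh_per_day = 500_000      # "more than half a million kilowatt-hours"
household_kwh_per_day = 29         # average U.S. household, per the article

wh_per_request = chatgpt_kwh_per_day * 1_000 / requests_per_day
households_equivalent = chatgpt_kwh_per_day / household_kwh_per_day
brain_kwh_per_day = 20 / 1_000 * 24  # ~20 W running all day

print(f"~{wh_per_request:.1f} Wh per request")               # ~2.5 Wh per request
print(f"~{households_equivalent:,.0f} households' worth")    # ~17,241 U.S. households per day
print(f"brain: ~{brain_kwh_per_day:.2f} kWh per day")        # ~0.48 kWh per day
```

On those rough numbers, one day of ChatGPT's estimated draw is on the order of a million times the daily energy budget of a single brain.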

How is the brain able to process so much, yet consume so little energy?

The first place to look is memory. Human memory does not keep an exact record of every event it has ever interpreted, unlike the exactness of digital memory. Human memory also does not bring everything it holds into recollection at once, making its selection process, not just for recall but for intelligence, exceed pedestrian digital memory, and then LLMs, for creativity, emotions, anticipation and previously untrained experiences.

How does human memory select what is useful at the moment? How does it store so much of what it has? How does it relate to the environment?

The brain works for the mind, which then does all that the brain is said to do. What is the mind and how does it ensure energy efficiency?

The human mind is theorized to be the collection of all the electrical and chemical impulses of neurons, with their features and interactions, in sets. The mind exists across the central and peripheral nervous systems. It is this broadness of mind that, in part, allows thoughts or emotions, positive or negative, to affect the body.

Conceptually, in sets, electrical and chemical impulses interact [or strike-fuse] to access the configurations for respective functions. The functions also often get qualified, to grade or stipulate them. This grading of the [interactions as] functions, by qualifiers, allows for an efficiency that outmatches machines. The collection of all qualifiers is termed consciousness, the super qualifier.

There are qualifiers like thick sets of impulses, which collect any information with similarities, and thin sets of impulses for whatever is unique. Thick sets quickly collect anything new that is similar to anything already within them, making it known without much energy.

Sets of impulses are theorized to be obtained in clusters of neurons. Some sets are thick in the way they allow a distillation of chemical impulses, apart from a regular thin set. For example, a thin set could have a configuration of A1045, say, for something unique about a door, while the thick set of all doors could be AA1100004455. This means that whatever is common between all doors can be found in that formation. Any part of the set can have a distribution [or sequence] reach it first, which may then provide a generalized interpretation of being a door. So while doors are seen, they are often seen simply as doors, without a separation, within the mind, of whether a thick or a thin set mechanized the interpretation. Some unique information is found in some thick sets, but some thin sets are separate. Thick sets often have an agglomerated design, which makes it easier for them to accrue [or consolidate] more commonalities between information, which could be memory, feeling, emotion or modulation.
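One way to read the thick-set and thin-set distinction computationally is as a storage scheme that keeps what is common across instances in one shared structure and only the unique residue in small separate ones. The sketch below is purely illustrative of that reading; the class names (ThickSet, ThinSet) and the toy door features are invented here, not taken from the conceptual model.

```python
# Illustrative sketch only: one reading of "thick sets" (shared commonalities)
# versus "thin sets" (unique residue). All names are invented for illustration.

class ThickSet:
    """Holds whatever is common across many instances, e.g. all doors."""
    def __init__(self, label):
        self.label = label
        self.common_features = set()

    def accrue(self, features):
        # The first instance seeds the set; later instances keep only the
        # overlap, so the thick set agglomerates commonalities, not copies.
        if not self.common_features:
            self.common_features = set(features)
        else:
            self.common_features &= set(features)

    def recognize(self, features):
        # A new input is "known" if it covers the shared commonalities,
        # without comparing it against every past instance.
        return self.common_features <= set(features)


class ThinSet:
    """Holds only what is unique about one particular instance."""
    def __init__(self, label, unique_features):
        self.label = label
        self.unique_features = set(unique_features)


doors = ThickSet("door")
doors.accrue({"hinged", "flat panel", "handle", "red paint"})
doors.accrue({"hinged", "flat panel", "handle", "oak"})

front_door = ThinSet("front door", {"brass knocker"})  # unique residue only

print(doors.common_features)                                # the shared door-ness
print(doors.recognize({"hinged", "flat panel", "handle"}))  # True: generalized as a door
```

On this reading, recognizing a new door means checking it against one small shared structure rather than against every door ever seen, which is the kind of saving the essay attributes to thick sets.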

There are also distributions, sequences that often take off from parts of some sets to the next, over the shortest distance possible. The mind optimizes for relays, to collect configurations of functions, rather than for exactness. There is also a uniformity of mind across all sensory sources, without hardware and software as separate layers, the way digital systems work. Though AI is great at pattern matching, it is still not comparable to the mind.

The mind uses its two elements, electrical and chemical impulses, doing everything with them in a way that saves energy, unlike everything that has to keep running to make LLMs work.

The brain saves a lot of energy through thick sets, while depending heavily on sequences, distributions and splits as qualifiers of functions.

LLMs predict, yet consume a lot of energy. They refute the free-energy principle, which says that the brain minimizes energy through predictive processing. The principle does not state how the brain predicts, or what, in the brain, does the predicting. The brain does not predict. Instead, electrical impulses, conceptually, have a feature called splits, where some go ahead of others in a set to interact with chemical impulses as before, with close precision. Neuroscience has already established that electrical impulses leap from node to node in myelinated axons, in a process called saltatory conduction. Splits [in sets of impulses] are propounded to be some impulses going ahead in that leap. This means that in many situations the full input does not proceed to interpretation; some impulses go ahead, and if they match the input, the rest simply follows; if not, the incoming signal is routed in the right direction. This explains how prediction error is corrected and how predictive coding works. Splits also explain habituation. However, even with splits, without thick sets [of impulses of information], it would be difficult to have an energy efficient brain.
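Computationally, splits could be pictured as an early partial check: a lead fragment of the input is matched against what is already configured, and only a mismatch triggers fuller processing. The toy sketch below is one possible reading, with invented names; it is not an implementation of predictive coding as formalized in the literature.

```python
# Toy sketch of "splits" as an early partial match: a lead fragment goes ahead,
# and only a mismatch triggers costlier full interpretation. Names are invented.

def interpret(signal, expected, lead=3):
    """Return an interpretation, doing less work when the lead fragment matches."""
    if signal[:lead] == expected[:lead]:
        # The split that went ahead matches: "the rest just follows"
        # without re-examining the full input.
        return f"recognized as '{expected}' (cheap path)"
    # Mismatch: the incoming signal is routed for full interpretation,
    # analogous to correcting a prediction error.
    return f"full interpretation of '{signal}' (costly path)"

print(interpret("door-creak-close", "door-creak-close"))  # cheap path
print(interpret("glass-shatter!!!", "door-creak-close"))  # costly path
```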

In research toward semiconductor chips that would save energy, it is possible to explore how to coalesce memory of similarities, which would save on how memory is stored and then called by the neural network architecture.

This means that while tens of billions of transistors are possible on a single chip, with on and off switches, it is possible to mutate the design, in parallel to thick sets of impulses, to fit how the brain organizes memory, toward the greatest energy efficiency.

Simply, it is possible to design a new kind of memory-like microchip, using a conceptual code of how the brain organizes information to collect commonalities. It may differ in quality and speed from current systems, including neuromorphic chips, but it would excel at saving energy and at better intelligence.
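As a software analogy (not a hardware design), coalescing memory of similarities resembles deduplicated storage: commonalities are written once and shared, while each item keeps only a reference plus its unique residue. The sketch below is hypothetical, with invented names, to show the kind of organization such a chip might mirror.

```python
# Hypothetical software analogy, not a chip design: commonalities are stored
# once and shared; each item keeps a block reference plus its unique residue.

class CoalescedMemory:
    def __init__(self):
        self.shared = {}   # block_id -> frozenset of common features, stored once
        self.items = {}    # name -> (block_id, frozenset of unique residue)

    def store(self, name, features):
        features = frozenset(features)
        # Find the existing shared block with the largest overlap.
        best_id, overlap = None, frozenset()
        for block_id, block in self.shared.items():
            o = block & features
            if len(o) > len(overlap):
                best_id, overlap = block_id, o
        if best_id is None:
            # Nothing shared yet: this item seeds a new block.
            best_id = len(self.shared)
            self.shared[best_id] = features
            overlap = features
        elif overlap != self.shared[best_id]:
            # Shrink the block to the true commonality, and push the removed
            # features into the residue of items already pointing at it.
            removed = self.shared[best_id] - overlap
            self.shared[best_id] = overlap
            for other, (bid, residue) in self.items.items():
                if bid == best_id:
                    self.items[other] = (bid, residue | removed)
        self.items[name] = (best_id, features - overlap)

    def recall(self, name):
        # Reconstruct the full item from the shared block plus its residue.
        block_id, residue = self.items[name]
        return set(self.shared[block_id] | residue)


mem = CoalescedMemory()
mem.store("door_a", {"hinged", "panel", "handle", "red"})
mem.store("door_b", {"hinged", "panel", "handle", "oak"})
print(mem.shared)            # the commonalities, written once
print(mem.recall("door_a"))  # rebuilt from shared block + unique residue
print(mem.recall("door_b"))
```

The design choice mirrors the thick-set idea above: what is common is consolidated and referenced, so adding another similar item costs only its small residue rather than a full copy.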

This may also be extended to appliances in general, where collections are possible, especially by frequency of functions or other parameters, to perform better and save energy.

Digital Memory, AI Safety and Processors

Large language models are hosted on the advantage of digital memory, before any statistical or computational processing strength. Digital memory possesses an exactness in video and audio that beats humans. Before AI, it was a source of intelligence only for humans.

Usually, memory alone is not enough; it is what acts on memory that results in intelligence. The qualifiers that act on human memory remain unmatched, but generative AI is statistically making digital memory a territory of additional anthropomorphic intelligence. Digital memory is so flexible that it allows LLMs to qualify it. This means that LLMs are the only non-living things that might be said to be mind-like, refuting panpsychism, which says that mind of some sort is everywhere. Super qualifiers [for functions of the mind] also refute the claim that consciousness is an illusion.

AI safety is not just challenged by everything else, but by the precision of digital memory. Texts in scrolls have been common through parts of history, as have images from paintings, sculptures, petroglyphs, pictographs, mirror reflections and so forth, forms of artificial memory that the materials themselves could not act on. But audio and video, kept the same way they were at the time of the event and available forever, made digital memory unequaled. If humans are present at any event alongside a device with digital memory, the device's total recollection will exceed the humans'.

The ease with which the human mind interprets audio and video, almost like having a physical experience rather than a description like text, also gave digital memory its superiority. Audio and video, like seeing and hearing an event through a window for humans, also provide digital systems some access into the physical world.

This is the first level of risk for AI: it is trained on an accurate representation of the physical world, even if it does not understand it. Though the digital is a part of the physical, its flexibility, free of physical constraints, makes it a different sphere.

The question of intelligence is secondary to the question of memory. There is no intelligence without memory. Memory is a function. Intelligence is a transport across destinations of the function.

Digital memory was great for human social, safety and productivity purposes, but with AI it has become a source of risk, with only some architecture standing between it and matching or surpassing humans in some aspects of intelligence that can be available on digital.

How can digital memory be approached to ensure that it complies with AI safety? How can the next generation of processors for AI data centers be built with safety in view? How can AI be categorized by intelligence level, relative to how the brain produces sentience and intelligence? What are the differences between human and digital memory, as well as human and digital intelligence, to ensure that humans retain the advantage?

Via: https://goodmenproject.com/featured-content/semiconductors-why-is-the-human-brain-more-energy-efficient-than-llms-kpkn/