The Google Search bar doesn’t feel like an artificial intelligence. No one speculates that it might soon become an artificial general intelligence (AGI) — an entity that is competitive with a human being across many domains.
But do you know many “generally intelligent” humans who can muster a decent translation into and out of 133 languages?
Or a co-worker who can perform mathematical operations instantly, know a good route between pretty much any two locations on the planet, and proffer a plausible answer to every question you might ask in less than a second?
Sure, these answers aren’t original, nor are they especially great. And they are increasingly smothered in ads that benefit the search bar’s parent company. But again, compare a top Google search result to your own knowledge on a random subject.
But why doesn’t Google Search feel like an artificial intelligence, while language models like ChatGPT — and especially the new GPT-4o — often do?
Here are three reasons. If they are accurate, then we can glimpse a wave that will land in the coming 12 months.
First, Google Search clearly derives its answers from human sources (i.e., webpages). That doesn’t count as “knowing,” we chortle, even if the algorithms behind finding and delivering relevant results are fantastically sophisticated.
And second, even if it did “know things,” intelligence is about spotting patterns in what we know, drawing inferences and acting on them, not just “recalling” something.
I’ll come back to reasoning and taking actions in the world, but there is a final, crucial reason why no one considers the search bar to be intelligent — a reason that matters just as much, if not more, to our growing sense of language models being entities, rivals even, true AIs, and not just handy tools.
There has been no existential angst, nor widespread job-replacement fear, about the Google Search bar, because it is a portal rather than a personality. No one believes the Search bar is the thing providing the answer to your question; instead, it is speedily taking you to places that give you answers.
But chatbots don’t need to break the illusion by bringing you to different sites or citing sources, nor can they or anyone pinpoint the training data that drove the output they just gave you. The simple, singular interface of ChatGPT makes you feel like the thing you are chatting to actually knows things. For many, it now feels appropriate to say “ChatGPT believes that …,” while no one would say “Google Search believes …”
The extent to which large language models like ChatGPT actually go beyond memorization of training data into basic reasoning is an article for another day, but even if they are just imitating the reasoning steps that they have “seen” in the training data, it sure feels like they are “thinking” about what you are saying.
Language models answer with personality, too. The ChatGPT that was unveiled in November 2022 thanks you for your question, admits mistakes, and can hold a back-and-forth conversation. OK, we know that it is not actually grateful for your question, and would admit to 2+2=5 if you push it hard enough, but the “Chat” part of ChatGPT was enough to transfix millions of people who had been uninterested in language models.
Responses don’t come back as a block, like Google Search results, but as a stream of text, as if they are being composed by someone typing with over-eager speed.
Yes, the models have, in the intervening 18 months, also gotten “more intelligent,” with GPT-4, Google Gemini, and Claude 3 Opus all vying for the title of “smartest model.” But the recent AI craze didn’t wait for this intelligence growth: ChatGPT got to 100 million users before the GPT-4 upgrade.
Of course, the professionals in my GenAI networking community focus mostly on how models can now process documents, analyze image inputs, generate catchy music, and absorb hundreds of thousands of words at once, at ever greater speeds.
But just as important for the public, in my opinion, has been that growing sense of “humanness,” with speech input and output added.
So I want you to pay attention to this more visceral shift, this coalescence of web-scale training data, tools, and abilities into an increasingly human-like conversational interface — the humanifestation, if you will, of deep learning-derived artificial intelligence.
If I am right, it is the human-like, all-in-one nature of chatbots that is driving most of the fear and excitement around AI and fueling the idea that we are birthing a new digital species.
Hence the unprecedented interest in guessing timelines for the creation of AGI (most predictions falling between 2025 and 2030, if you’re wondering), as well as frustrations over that term’s growing popularity.
This would also explain the popularity of Character.AI, an app with tens of millions of users who spend hours interacting with roleplaying bots. Users are suspending disbelief, at scale, and some are even falling in love.
If it is the ersatz humanity of models that is their superpower, rather than their benchmark scores, then we may be able to anticipate a second wave of AI-mania, due in the coming months.
Models will soon have video avatars. The technology is there, with lip-syncing, facial expressions, and text-to-speech getting more life-like and generated in real time. Models can even analyze the emotions of your voice and adapt their speech to your mood.
Imagine video-calling GPT-5, as soon as 2024 or early 2025. Your favorite avatar answers (of course, you speak on a first-name basis), picking up the conversation exactly where you left off.
It won’t hurt your perception of “general intelligence” that you would be chatting to a model that likely knows more about medicine than many doctors, more about the law than junior lawyers, and doubles as an effective counselor and decent financial advisor.
The condensed insights from trillions of words — public or otherwise — and billions of frames from video sources like YouTube, all presented in a simple, human form.
And unlike a perfectly passive search bar, these new models will also be trained from the ground up to take basic actions on your behalf, asking questions when they are not sure what to do. Not yet trusted to act on their own, but worthy assistants nonetheless.
I know, I know: this is mostly just putting a pretty, chatty face on the same technology, plus a bit of fine-tuning and finessing. But that’s what ChatGPT was, in friendly text form, to the GPT-3 base model that was released in 2020.
In November 2022, it was billed as a low-key research preview. And we all know the frenzy that came after.
If I am wrong, this new video interface will be a passing fad and cause little new interest or demand. AGI will be a term that slips into obscurity, and growing debates over its technical definition will fizzle out. Instead, people will simply await smarter models — ones that can, for example, do their dishes or solve any real-world software engineering challenges they may face (for real this time).
But if I am right, billions may soon feel the AGI. The sense of AI being a “thing” rather than a “tool” will grow, with all the concomitant angst and addiction.
Sure, older hands will complain about its inaccuracies, and how the young spend all day on something else that will do them no good. Concerns about AI persuasion will only grow. Many will feel possessive about their models, and protective of the relationships they build with them. More debates will rage, without end, over AI consciousness. And then will come the AI adverts. Delivered by the model you have come to rely on, slipped into conversations in 2025.
Who knows, it may well be a Google model that first claims the mantle of an artificial general intelligence. Not only did Google invent the Transformer architecture behind ChatGPT, it pioneered the implementation of neural networks in Search … and that, handily enough, brings us back to the humble search bar, and what you might soon want to ask it.
“Hey Search, it’s me. You know how we’ve never before created something that resembles a new species — how do you think that’s gonna go for us?”
Philip is the creator of the AI Explained YouTube channel. He also runs AI Insiders, a community of more than 1,000 professionals working in generative AI across 30 industries and authors the newsletter Signal to Noise.
This article Why ChatGPT feels more “intelligent” than Google Search is featured on Big Think.
When water freezes, it transitions from a liquid phase to a solid phase, resulting in a drastic change in properties like density and volume. Phase transitions in water are so common most of us probably don’t even think about them, but phase transitions in novel materials or complex physical systems are an important area of study.
To fully understand these systems, scientists must be able to recognize phases and detect the transitions between them. But how to quantify phase changes in an unknown system is often unclear, especially when data are scarce.
Researchers from MIT and the University of Basel in Switzerland applied generative artificial intelligence models to this problem, developing a new machine-learning framework that can automatically map out phase diagrams for novel physical systems.
Their physics-informed machine-learning approach is more efficient than laborious, manual techniques that rely on theoretical expertise. Importantly, because their approach leverages generative models, it does not require the huge, labeled training datasets used in other machine-learning techniques.
Such a framework could help scientists investigate the thermodynamic properties of novel materials or detect entanglement in quantum systems, for instance. Ultimately, this technique could make it possible for scientists to discover unknown phases of matter autonomously.
“If you have a new system with fully unknown properties, how would you choose which observable quantity to study? The hope, at least with data-driven tools, is that you could scan large new systems in an automated way, and it will point you to important changes in the system. This might be a tool in the pipeline of automated scientific discovery of new, exotic properties of phases,” says Frank Schäfer, a postdoc in the Julia Lab in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-author of a paper on this approach.
Joining Schäfer on the paper are first author Julian Arnold, a graduate student at the University of Basel; Alan Edelman, applied mathematics professor in the Department of Mathematics and leader of the Julia Lab; and senior author Christoph Bruder, professor in the Department of Physics at the University of Basel. The research is published today in Physical Review Letters.
Detecting phase transitions using AI
While water transitioning to ice might be among the most obvious examples of a phase change, more exotic phase changes, like when a material transitions from being a normal conductor to a superconductor, are of keen interest to scientists.
These transitions can be detected by identifying an “order parameter,” a quantity that is important and expected to change. For instance, water freezes and transitions to a solid phase (ice) when its temperature drops below 0 degrees Celsius. In this case, an appropriate order parameter could be defined in terms of the proportion of water molecules that are part of the crystalline lattice versus those that remain in a disordered state.
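Made explicit (a simple formalization of that example; the notation is mine, not the researchers’), such an order parameter is just a fraction:

$$ \phi = \frac{N_{\mathrm{lattice}}}{N_{\mathrm{total}}}, \qquad \phi \approx 0\ \text{(liquid)}, \qquad \phi \to 1\ \text{(ice)} $$

The phase transition then appears as a sharp change in $\phi$ as the temperature crosses 0 degrees Celsius.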
In the past, researchers have relied on physics expertise to build phase diagrams manually, drawing on theoretical understanding to know which order parameters are important. Not only is this tedious for complex systems, and perhaps impossible for unknown systems with new behaviors, but it also introduces human bias into the solution.
More recently, researchers have begun using machine learning to build discriminative classifiers that can solve this task by learning to classify a measurement statistic as coming from a particular phase of the physical system, the same way such models classify an image as a cat or dog.
The MIT researchers demonstrated how generative models can be used to solve this classification task much more efficiently, and in a physics-informed manner.
The Julia Programming Language, a popular language for scientific computing that is also used in MIT’s introductory linear algebra classes, offers many tools that make it invaluable for constructing such generative models, Schäfer adds.
Generative models, like those that underlie ChatGPT and DALL-E, typically work by estimating the probability distribution of some data, which they use to generate new data points that fit the distribution (such as new cat images that are similar to existing cat images).
However, when simulations of a physical system using tried-and-true scientific techniques are available, researchers get a model of its probability distribution for free. This distribution describes the measurement statistics of the physical system.
A more knowledgeable model
The MIT team’s insight is that this probability distribution also defines a generative model upon which a classifier can be constructed. They plug the generative model into standard statistical formulas to directly construct a classifier instead of learning it from samples, as was done with discriminative approaches.
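In the simplest two-phase setting, that “standard statistical formula” is just Bayes’ rule. Writing $p(x\mid \mathrm{I})$ and $p(x\mid \mathrm{II})$ for the simulation-derived measurement distributions of each phase, and $\pi_{\mathrm{I}}, \pi_{\mathrm{II}}$ for prior weights (notation assumed here, not taken from the paper):

$$ P(\mathrm{I} \mid x) = \frac{p(x \mid \mathrm{I})\, \pi_{\mathrm{I}}}{p(x \mid \mathrm{I})\, \pi_{\mathrm{I}} + p(x \mid \mathrm{II})\, \pi_{\mathrm{II}}} $$

Because every term on the right-hand side is already supplied by the generative models, there are no classifier parameters left to train.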
“This is a really nice way of incorporating something you know about your physical system deep inside your machine-learning scheme. It goes far beyond just performing feature engineering on your data samples or simple inductive biases,” Schäfer says.
This generative classifier can determine what phase the system is in given some parameter, like temperature or pressure. And because the researchers directly approximate the probability distributions underlying measurements from the physical system, the classifier has system knowledge.
This enables their method to perform better than other machine-learning techniques. And because it can work automatically without the need for extensive training, their approach significantly enhances the computational efficiency of identifying phase transitions.
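As a rough illustration of the whole pipeline, here is a minimal sketch with toy stand-in distributions (not the researchers’ code, which per the article builds on Julia and is far richer): Bayes’ rule turns two simulation-supplied likelihoods directly into a phase classifier, which is then swept across temperature to locate the transition.

```python
import numpy as np

# Toy stand-ins for the class-conditional distributions p(x | phase).
# In the real setting these come "for free" from trusted simulations of the
# physical system in each phase; all names and numbers here are illustrative.
def log_p_ordered(x):
    return -0.5 * ((x - 1.0) / 0.3) ** 2   # measurements cluster near 1

def log_p_disordered(x):
    return -0.5 * ((x - 0.0) / 0.3) ** 2   # measurements cluster near 0

def posterior_ordered(x):
    """Bayes' rule with equal priors: the generative models ARE the
    classifier, so nothing is learned from labeled samples."""
    a, b = log_p_ordered(x), log_p_disordered(x)
    m = np.maximum(a, b)                   # log-sum-exp trick for stability
    return np.exp(a - m) / (np.exp(a - m) + np.exp(b - m))

# Sweep a control parameter (temperature) with fake data whose mean "melts"
# from 1 toward 0, mimicking an order parameter vanishing at a transition.
rng = np.random.default_rng(0)
temperatures = np.linspace(0.0, 2.0, 41)
mean_posterior = [
    posterior_ordered(rng.normal(max(0.0, 1.0 - T), 0.3, 500)).mean()
    for T in temperatures
]

# The steepest drop in the classifier's output marks the estimated transition.
T_c = temperatures[np.argmin(np.gradient(mean_posterior, temperatures))]
print(f"Estimated transition near T = {T_c:.2f}")
```

Note what is absent: no labeled dataset and no training loop, which is exactly the efficiency win the researchers describe.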
At the end of the day, similar to how one might ask ChatGPT to solve a math problem, the researchers can ask the generative classifier questions like “does this sample belong to phase I or phase II?” or “was this sample generated at high temperature or low temperature?”
Scientists could also use this approach to solve different binary classification tasks in physical systems, possibly to detect entanglement in quantum systems (Is the state entangled or not?) or determine whether theory A or B is best suited to solve a particular problem. They could also use this approach to better understand and improve large language models like ChatGPT by identifying how certain parameters should be tuned so the chatbot gives the best outputs.
In the future, the researchers also want to study theoretical guarantees regarding how many measurements they would need to effectively detect phase transitions and estimate the amount of computation that would require.
This work was funded, in part, by the Swiss National Science Foundation, the MIT-Switzerland Lockheed Martin Seed Fund, and MIT International Science and Technology Initiatives.
This article Scientists use generative AI to answer complex questions in physics is featured on Big Think.
I recently got into death metal and wondered why so much of society and I were turned off by screaming and growling in singing. Why is that? – Seth, US
What I love about this question is the “I recently got into death metal” bit. I don’t know how it came about, but I imagine Seth walking past an open window, hearing some death metal, and thinking, “Hey, that’s my kind of jam.” I imagine Seth rushing back to his computer, finding the relevant subreddit, and staying up until the early hours flicking through YouTube’s not-entirely-legal catalog of copyrighted death metal. And now, here Seth is, wondering why he’s such an anomaly.
So, I suppose the first point to make is the obvious one: not everyone is turned off by screaming. In fact, Seth rather likes it. A great many people like it. But I will take the question as it was asked. Because I think Seth is right, enjoying the distorted clip of a death-metal scream is certainly a minority predilection. So much of society is turned off by screaming.
To answer his question, we’re going to have to unroll the busy scoresheets of aesthetics. We’re going to call upon the musical thoughts of two philosophers: Arthur Schopenhauer (via Richard Wagner) and Susanne Langer.
Schopenhauer: Cutting to the chaff
When you try to establish any kind of “philosophy of art,” you quickly come up against a big problem: art is a heck of a large category. How can you say anything meaningful about a subject that spans ballet, toilet graffiti, funeral orations, and hand puppetry? But the thing about the philosophy of music is that it’s different in one key respect: there is no surface matter. There is nothing that correlates with what the art is trying to represent.
For Arthur Schopenhauer, music is the highest of all art forms because it does not try to copy reality (as a movie might) or some phenomena in the world (such as an emotion). Music is its own category. It’s untrue to say that all art is representative of something in the world — what does a dance “represent”? — but all other art does require some gatekeeper. There is a medium required that shows us something, which then elicits some feeling or thought. But for Schopenhauer, music cuts out the middleman. There is no intermediary medium; no canvas is required.
Music and sound go to the heart of things. Schopenhauer thought that all art should reveal “the essential nature of external things,” but music does so immediately and intensely. Art is a transformative, often spiritual experience that brings the subject in touch with the “Will” underpinning all reality. Music is the purest, unfiltered form of that.
We can turn to Wagner to bring Schopenhauer’s philosophy to bear on Seth’s question. Wagner frequently made use of a dramatic scream in his operas, either through his stage directions or through a high, screeching pitch on the score. He did so consciously, in pursuit of Schopenhauer’s point: he believed a scream was a pure and primal expression of some core human truth. As he wrote, “Without any reasoning go-between, we understand the cry for help, the wail, the shout of joy, and straightaway answer it in its own tongue.” A scream has a power that every human, everywhere, can appreciate. There is nothing more human than a scream.
So, if we agree with Schopenhauer and Wagner, why is Seth right to say much of society is “turned off” by screaming? The answer might be because of the intensity of the experience.
Langer: Burning bright and burning out
Wagner knew that you couldn’t have an opera constantly peppered with screaming because it would start to lose its effect. An intense and honest explosion of feeling doesn’t happen all the time; the rarity of the scream makes it more powerful. In her 1953 book, Feeling and Form, the philosopher Susanne Langer argued that “music is a tonal analogue of emotional life.” Music is not itself scary, joyful, or sad; rather, those emotions come embedded, or embodied, in the music.
If, then, music symbolizes and expresses human emotions, there’s only so much screaming one person can (usually) take. Screams represent distress, trauma, desperation, and panic, all of which are essential ingredients of a dramatic aesthetic experience. But you can’t experience them for too long without coming away damaged.
A different kind of listener
Music tastes are an odd thing. If you speak to someone who doesn’t like death metal, then it can seem truly bizarre that some people do. A song can be breathtaking art to one person and an ear-aching din to others. So why are people like Seth such an anomaly?
You probably don’t need a study to tell you (although there are many) that listening to someone screaming is a stressful experience. Cortisol levels shoot up, our hearts race, and our breathing quickens. And yet, other studies seem to show that listening to screaming metal is good for you; it alleviates the fear of death and can elicit feelings of “power, joy, peace, and wonder.”
No one knows why these differences exist, but I have a theory. It’s utterly unscientific, so please do write in if you disagree or have a counter-position. I believe that it’s all to do with how far you empathize with the death-metal screams. If you hear the screams as someone else telling an unsettling story about their own scream-worthy lives, then you might find them unenjoyable. I think Langer has a point, and listening to someone else’s wailing misery is not fun for any sustained period.
On the other hand, if you step inside the song and scream with the singer, I believe you’ll enjoy it more. In this case, it’s a kind of cathartic experience where you’re screaming away your own internal pain. You’re not listening to someone else in distress. You’re processing and cleansing your own distress.
So, Seth, my answer to your question is that when most people hear other people screaming, it’s a stressful, unenjoyable experience. But if you yourself are screaming with the singer (literally or not), then you will enjoy a cathartic release of some kind of pent-up emotion.
This article Everyday Philosophy: The hidden beauty of death-metal screaming is featured on Big Think.
Three psychology and sociology experts, Robert Waldinger, Michael Slepian, and Richard Reeves, come together in this compilation to discuss the psychology of loneliness and the way we can combat the “friendship recession.”
It’s 2024. It’s harder than ever to foster deep connections with others. Everyone feels like they’re missing out on friendships, and every day of isolation makes it even harder to escape the rut.
From keeping secrets to workism, these experts are unpacking why we feel lonely and suggesting the ways we can combat it. They encourage us to reach out, be vulnerable, and prioritize our relationships, reminding us that we are not alone in our struggle and that meaningful connections are within reach.
By following their advice, we can transform our social lives and experience the joy and fulfillment that come from true companionship. Understanding the root causes of our loneliness and actively working to build and maintain connections can help us break free from isolation and create a more connected, fulfilling life.
This video Loneliness: The silent killer, and how to beat it is featured on Big Think.
“Barnum statements” — named after 19th-century showman P.T. Barnum — are the kind of statements with which almost everyone agrees. They’re loved by soothsayers, clairvoyants, and mentalists the world over because they make it seem like you know what someone is like. Horoscopes are almost entirely written in Barnum statements.
“Avoid health risks today.”
“A project you’ve been working on will be coming to an end.”
These are examples of Barnum statements. Who wants to risk their health? Who isn’t working on some project?
One of the most common examples of a Barnum statement is: “Sometimes, you are sociable and outgoing. At other times, you like to be alone and enjoy your own company.” Well, yes. That applies to almost everyone living on our planet. It gets to the heart of one of the most fundamental paradoxes of the human condition: We are social animals who also like to hide in dens. We’re chittering meerkats one day and hibernating bears the next.
This is known as the “porcupine dilemma,” and it comes to us via Arthur Schopenhauer and, later, Sigmund Freud. It reveals something fundamental about the human condition and teaches us a great many things about how to live well.
The paradox of other people
Imagine two porcupines, cold and shivery, trying to keep warm on a frost-biting night. They huddle together to share their warmth, but as they do, they prick each other. Half of the evening is a spiky dance, as the porcupines oscillate between “shared warmth” and “painful pricks.”
For Schopenhauer, almost all our relationships are like this thorny tango. We want to be around other people. We like to laugh, gossip, and dance, and other people offer us consolation and love. And yet, people can be exhausting. They can be unbelievably annoying. Sometimes, we just want to retreat into the bubbled comfort of our home and talk to absolutely no one.
In Freud’s somewhat more fervid language, whenever we meet someone, we are immediately caught in tension. On the one hand, this person might be “a potential helper or sexual object,” but they could also be a rival or aggressor. In other words, we can have sex with or fight everyone we meet. Other people can be both a source of support and abuse, often within the same half hour.
The porcupine dilemma is a useful way to reflect on a relationship. Sometimes a relationship can feel overbearing and suffocating. It can tire you out. At other times, relationships can give light to life. For Schopenhauer and Freud, it’s all about the size of your spikes.
Lessons from the spiky fringe
If we accept that something like the porcupine dilemma is true for most people, what can we do about it? What can we learn from the spiky fringe? Here are three work-life suggestions:
Respect boundaries. In the workplace, much like our porcupines looking to get warm, we all need collaboration to foster teamwork and make our jobs generally easier. However, respecting personal boundaries is necessary if we are to avoid conflict. All teams have to walk the fine line between working closely and not crowding each other out. In 2018, Bernstein and Turban authored an interesting, counterintuitive paper that showed open office layouts actually decreased the number of face-to-face interactions between colleagues. They theorized that the forced collaboration of an open office actually “triggered a natural human response to socially withdraw from officemates and interact instead over email and IM.” In other words, open offices force porcupines too close together, and they run away to safety.
Recognize remote space. Respecting boundaries is not just a thing for physical, brick-and-mortar offices but is equally true for remote working. An empty calendar isn’t an invitation to arrange a meeting. A green button and “online” sign don’t mean you need to send someone a message. Give people space. Let them do their job. In recent years, there have been a series of studies that show that while remote working can make people happier, it also depends on how well boundaries between work and home are managed, both by individuals and the organization.
Create a support bubble. Sometimes, despite the best office plans and workplace policies, a day at work can grind you down. It can feel like you’ve done nothing but talk in business jargon for eight hours straight. It can feel like you’ve been pricked and spiked enough for a year. In these moments, we need to establish a kind of recovery space. We need to decompress and balm our sore, swollen wounds. In an interview for Big Think+, the actor, writer, and director Jesse Eisenberg talks about how important his “bubble” is. As he puts it, “I really don’t like to talk about the industry that I’m in because I find, in some ways, you can never turn it off… And so I surround myself with people who do different things.” Set up a support bubble, which is not work-related at all. Talk to your loved ones about sports, movies, books, gardening, or the latest family gossip. It doesn’t matter what you talk about; turn off the business mind. Find people who are not porcupines.
This article How the “porcupine dilemma” teaches us to cooperate like champions at work is featured on Big Think.
In our experience, all physical systems eventually tend toward equilibrium: a state where entropy is maximized and no further energy can be extracted. This seems like an inevitable consequence of the second law of thermodynamics, and it is absolute for any closed-and-isolated system. But our Universe is neither closed nor isolated, as it began from a hot and dense state and has been cooling, expanding, and clumping ever since the hot Big Bang. Even though its entropy has increased dramatically, parts of it, like stars, planets, and even biological organisms, routinely extract energy and put it to work toward creating ordered systems. It seems like equilibrium, even 13.8 billion years later, is still very far away in a cosmic sense.
But will the Universe — the ultimate out-of-equilibrium system, in some sense — eventually reach equilibrium after all? That’s what James Calautti wants to know, asking:
“Is it possible that in the far distant future, after every single star has died, after the white dwarfs and neutron stars have faded, and the black holes have decayed, will the universe achieve a state of equilibrium?”
If certain assumptions hold true about our Universe, then yes, we will eventually achieve a state of pure equilibrium: where no further energy can be extracted to do work or enable reactions of any type. But that’s not necessarily how it’s going to shake out, even in the end. Here’s what we need to consider. Our Universe, from the hot Big Bang until the present day, underwent a huge amount of growth and evolution, and continues to do so. Our entire observable Universe was approximately the size of a modest boulder some 13.8 billion years ago, but has expanded to be ~46 billion light-years in radius today. The complex structure that has arisen must have grown from seed imperfections of at least ~0.003% of the average density early on, and has gone through phases where atomic nuclei, neutral atoms, and stars first formed.
When the Big Bang first began, the Universe had practically no structure in it at all. No stars, no galaxies, no atoms, no atomic nuclei. It was hot, dense, and incredibly uniform: where the least dense regions were still ~99.99% as dense as the average ones and the most dense regions were only ~100.01% as dense as the average. Even though it was filled with ultra-relativistic quanta of radiation, plus particles of matter and antimatter, its entropy was around S = 10^88 kB, where kB is Boltzmann’s constant. While 10^88 may be a very large number, it’s not maximally large, especially not for the number of particles in the Universe.
Over time, as the Universe has cooled and gravitated, all sorts of structures have formed, from atomic nuclei to atoms to molecules, all the way up to planets, stars, stellar systems, galaxies, and clusters of galaxies embedded within a cosmic web. It’s as though the initially high energy state of the Universe, as the Universe expanded and cooled:
proceeded through a number of transitions,
where, from the hotter-and-denser conditions to the colder-and-sparser conditions, these transitions proceeded in an out-of-equilibrium fashion,
leading to the binding and formation of structure,
that seemingly created tiny “ordered” pockets at the expense of a larger-scale increase in “disorder,”
so that entropy increased tremendously over time. Today, the entropy of the Universe is about S = 10^103 kB, or about 15 orders of magnitude (a factor of a quadrillion) greater than it was 13.8 billion years ago. As stars form and gas collapses, entropy increases, but the release of energy can power reactions that require work, such as biological processes.
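Where does all that extra entropy get stored? The article doesn’t itemize it, but on the standard accounting the budget is dominated by black hole horizons, whose entropy follows the Bekenstein-Hawking formula:

$$ S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar} \approx 10^{77}\, k_{\mathrm{B}} \left( \frac{M}{M_{\odot}} \right)^{2} $$

A single supermassive black hole of a billion solar masses therefore carries ~10^95 kB on its own, and summing over the Universe’s supermassive black holes lands near the quoted ~10^103 kB.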
As time continues to march on, all sorts of energy-emitting reactions will still occur. Neutral atoms will form from the ionized plasmas in interstellar space. Light atomic nuclei will fuse into heavy ones inside the cores of stars. Clouds of gas will gravitationally collapse into bound structures like stars and planets. And massive objects will collapse down to create black holes, among many other natural processes. All of these processes, as well as many others, emit energy, which allows work — the physicist term for energy-that-gets-put-to-use — to be performed. These processes all increase entropy on a global scale, but the emitted energy can be used to create regions that are more ordered, the same way that sunlight absorbed by photosynthetic organisms on Earth can be used to locally create order.
Nevertheless, the more energy-emitting reactions occur and the more time that passes, the greater the Universe’s entropy gets. As this occurs, there are now fewer opportunities for extracting energy from various processes. The Universe runs out of hydrogen, and fewer and fewer new stars form. Dark energy drives galaxy groups and clusters apart, and fewer cosmic mergers occur. More black holes form, and more compact masses get ejected into intergalactic space. Eventually, the entropy of the Universe starts to level off at a maximum value of around S = 10^121 kB, which it will reach around ~10^20 years from now. While today’s Universe might be littered with luminous objects, i.e., stars, many black holes exist alongside them as well. At present, there are an estimated 40 quintillion black holes within the observable Universe, but as time goes on and more stars die, the total amount of mass in black holes will increase. Only on extremely long timescales will black holes appreciably decay and turn back into radiation.
At that point in the far future, all of the stars that exist today will have long since burned out. The future generations of stars that will have formed from their ashes and the remaining gas within galaxies will have burned out, too, leaving only stellar remnants behind: white dwarfs (which will have faded to black), neutron stars (which will have faded to black as well), black holes, and failed stars. On occasion, two failed stars will merge together and briefly create a low-mass red dwarf star: the last luminous lights present in our cosmos. When they burn out and fade to black as well, the last stellar lights will be extinguished.
Gravitational interactions will cause galactic remnants to decay and dissociate. Objects in orbital systems, like planets around stellar corpses, will see their orbits decay due to gravitational radiation, leading to inspirals and mergers. Black holes themselves will decay away through Hawking radiation, with stellar mass black holes taking ~10^67 years to decay and the largest supermassive black holes taking upward of ~10^100 years to decay away completely. Meanwhile, intergalactic space becomes sparser and sparser as dark energy continues to accelerate unbound objects away from one another. Eventually, there are no energy-producing sources left in the Universe, and the entropy of what remains is maximized.
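Those wildly different lifetimes follow from the steep mass dependence of Hawking evaporation. As a rough, standard estimate (not spelled out in the article itself):

$$ t_{\mathrm{evap}} = \frac{5120\,\pi\, G^{2} M^{3}}{\hbar c^{4}} \sim 10^{67}\ \mathrm{yr} \times \left( \frac{M}{M_{\odot}} \right)^{3} $$

so a stellar-mass black hole needs ~10^67 years, while a 10^11-solar-mass giant takes (10^11)^3 = 10^33 times longer, near the quoted ~10^100 years.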
When the final black hole in the Universe decays, releasing energy via Hawking radiation, the last source of electromagnetic energy will have been created. When the final gravitationally bound objects inspiral into one another and merge, the last source of gravitational wave energy will have been created. At last, there will be no more energy to extract from any natural, physical process anywhere in the Universe. That end state marks what we call the thermodynamic “heat death” of a system: in this case, the system is the entire Universe. That state, where:
no further energy can be extracted,
no more useful work can be done,
and where entropy has reached its absolute maximum and can increase no further,
is what truly represents an equilibrium state. So long as there are no further transitions that are going to occur — assuming dark energy is truly a cosmological constant and that there are no undiscovered fundamental forces, interactions, or reactions that can occur — that’s the end state we’re headed for. There will be isolated, stable clumps of dead matter that are present, all within an expanding, dark energy-dominated Universe, with a tiny, low-energy background of uniform radiation arising from dark energy’s presence (via Unruh radiation) with a minuscule temperature of around ~10^-30 K. If one accelerates uniformly through even completely empty space, they will no longer find that space is empty, but rather will experience a bath of thermal radiation arising from quantum effects: the Unruh effect. The greater the magnitude of the acceleration, the greater the temperature of the radiation they will measure and experience.
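That tiny temperature is set by the late-time expansion rate. For a pure cosmological constant, the horizon temperature (a Gibbons-Hawking/Unruh-type result; the arithmetic here is mine) is:

$$ T = \frac{\hbar H_{\Lambda}}{2\pi k_{\mathrm{B}}} \approx 10^{-30}\ \mathrm{K} \quad \mathrm{for}\ H_{\Lambda} \sim 10^{-18}\ \mathrm{s}^{-1} $$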
At least, that’s what we can expect in a completely “vanilla” Universe. Our Universe, however, may not be completely vanilla at all, as by vanilla, we’re assuming a number of things:
that dark matter is stable, collisionless, and non-self-interacting,
that dark energy is a cosmological constant and will never change in its properties,
that protons are fundamentally stable and there are no super-heavy exotic bosons that will lead to their decay,
and that there are no new matter-or-energy transitions that are allowed to occur, such as vacuum decay.
None of these things are necessarily true, of course. They’re largely consistent with our best observations to date (although there are some hints that suggest otherwise), but we are compelled to keep an open mind as to what surprises still remain possible.
It’s possible that there will be additional sources of energy that we have not yet discovered that will reveal themselves down the line, delaying equilibrium. It’s possible that further decays are going to occur, either of normal matter, of dark matter, or even of dark energy and/or the quantum vacuum itself. And it’s also possible that dark energy will evolve in a way that will cause it to either strengthen over time, leading to a Big Rip scenario and/or a new inflationary epoch, or that will cause it to reverse sign and recollapse the Universe in a Big Crunch. We have to keep our minds open to these possibilities, even if there isn’t yet data to support their reality.
Does dark energy evolve over time? Is it something other than a pure cosmological constant? If we look at the most recent data from the DESI collaboration — the Dark Energy Spectroscopic Instrument collaboration — which has taken data from millions of galaxies across billions of light-years of space, it suggests (but does not prove) that dark energy used to be stronger and “more negative” in the past, and is now weaker and “less negative” than it used to be. A recent paper from collaboration members (as pointed out by Ciaran O’Hare) shows that, from DESI data alone, this evolution is strongly indicated, but that when you include supernova data and CMB data, the evidence for dark energy’s evolution weakens substantially.
If dark energy either:
does evolve,
can evolve,
or can transition to having a different value than it has today,
then it’s no longer necessarily the case that we’ll approach an equilibrium state as outlined in the “vanilla” Universe case. It’s possible that there will be more energy to extract, but it’s also possible that our modern picture for the far future of the Universe is incomplete and missing an important transition or event that will someday occur. If dark energy is non-constant or if it is possible to extract energy from the vacuum of space after all, then all bets are off. We normally conceive of our Universe as having emerged from a preceding period of cosmic inflation, with our Big Bang occurring where one region of inflating space ceased inflating and transitioned to being dominated by matter and radiation. It’s theoretically possible that instead we were birthed from the creation of a black hole in an earlier Universe, and that the black holes spawned in our Universe give rise to baby universes within them.
It’s also possible that our current understanding of black holes is incomplete. It’s possible that what we perceive as black holes are merely gateways to a baby universe, but that the only way to know is to be “inside” that black hole, in which case we’ll never be able to know that from our perspective outside the black hole’s event horizon. It’s possible that every time a black hole is formed, a new baby universe is spawned, with someone inside that Universe perceiving their own inflationary event followed by a hot Big Bang. If this is actually true, then what we perceive as “an equilibrium state” in our own Universe is actually accompanied by an enormous suite of cosmic evolution in each of the baby universes that arise as “daughters” of our own.
There are many ways to envision our own Universe failing to come to equilibrium as well. Our far future, if dark energy is a constant, looks a lot like our earliest beginnings: dominated by empty space that’s expanding exponentially. This correspondence between the late-time dark energy state and the early-time inflationary state has only the magnitude and scale of inflation vs. dark energy as the major difference. It’s possible that some mechanism that hasn’t yet been discovered can result in some type of transition in the far future: one that leads to a new type of Big Bang (or its analogue) and then gives rise to all sorts of other non-equilibrium reactions. In a vacuum decay scenario, our Universe exists in a false minimum state, and it’s possible to arrive, either through quantum tunneling or an energetic kick that causes us to leave that state, to enter a true (or truer) vacuum state. If that happens anywhere, every bound structure, from protons on up, will be destroyed in a “bubble of destruction” propagating outward at the speed of light.
We can only draw conclusions about what we expect to occur, of course, based on what we know and observe today. We assume that our best observations, measurements, and theories about the Universe represent an accurate picture of reality, and all of our conclusions are reliant on how good those assumptions are. If general relativity and quantum field theory paint a complete picture of the forces in our Universe, and if dark matter (as a collisionless, non-self-interacting species of particle or fluid) and dark energy (as a cosmological constant) are the only exotic, non-Standard Model ingredients in our Universe, then yes: we can expect that our Universe will someday, in the far future, reach equilibrium.
In this scenario — where everything agrees with the Standard Models of cosmology and particle physics — there will be no novel transitions or releases of energy in the far future beyond what’s already known: stellar death, gravitational wave emission, black hole decay via Hawking radiation, etc. We won’t live to see it, but the Universe will approach an equilibrium state, resulting in what we call the heat death of the Universe. It will become cold, isolated, empty, and filled with an extremely low-energy bath of radiation, from which no further energy can be extracted. As entropy is maximized, an equilibrium state will eventually be achieved.
But we have to keep in mind that what we think is true today may turn out to be superseded, down the road, by a scientific truth that paints a picture that’s a better approximation of reality than we have today. Some of the things that appear true today:
that dark matter doesn’t bind together, interact, or release energy,
that dark energy doesn’t evolve and won’t decay,
that black holes simply collapse, live a long time, and then decay back into quanta via Hawking radiation,
and that there are no new fundamental interactions, forces, particles, and/or reactions beyond what is known at present,
may turn out to be false when more knowledge is obtained.
We have to keep our minds open to all possibilities that haven’t yet been ruled out by observation and experiment, and must continue to challenge the assumptions that are too easy to let pass without scrutiny: that our best answer today will continue to be our best answer in the future. Our Universe appears headed toward equilibrium right now, but that doesn’t necessarily mean that’s where it will wind up. As Carl Sagan so wisely reminded us many years ago:
“At the heart of science is an essential balance between two seemingly contradictory attitudes: an openness to new ideas, no matter how bizarre or counterintuitive they may be, and the most ruthless skeptical scrutiny of all ideas, old and new. This is how deep truths are winnowed from deep nonsense.”
Our Universe may someday achieve a heat death and a state of maximum entropy: a true state of thermal equilibrium. But as long as there are new measurements to make and new questions to ask and explore about phenomena whose nature is not fully understood, there will always be the tantalizing possibility that there’s a whole lot more out there, just waiting to surprise us.
Send in your Ask Ethan questions to startswithabang at gmail dot com!
This article Ask Ethan: Will the Universe ever reach equilibrium? is featured on Big Think.
In 1780, hunched over a table at his home in London, Jeremy Bentham wrote the first lines of the first chapter of one of his most famous works. It read, “Nature has placed mankind under the governance of two sovereign masters: pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do.”
The British philosopher built an entire philosophy around this idea — that we are all motivated by pleasure and pain. Lucky for him, then, that almost all of the social sciences today agree with him. We are a hedonistic, happiness-seeking species who fear the pain-inflicting monsters of the world. Under all the pretense and bravado, we can be reduced to the simple push and pull mechanisms of the carrot and stick. After we get over the humbling and demoralizing simplicity of this, we can learn a few valuable lessons. We can game our own mechanisms and manipulate our Benthamite sovereigns. We can do anything. It’s all to do with something the writer George Mack called “Skinner’s Law.”
To help us make sense of this cheat code for human motivation, Big Think spoke with behavioral scientist Katy Milkman, the James G. Dinan Professor at The Wharton School of the University of Pennsylvania and author of How to Change: The Science of Getting from Where You Are to Where You Want to Be.
Commitment devices
According to Mack, Skinner’s Law is that when you are procrastinating or finding a task hard to get on with, you have two choices: either “make the pain of not doing it greater than the pain of doing it” or “make the pleasure of doing it greater than the pleasure of not doing it.” Since we know that we are only motivated by two things, we can use our higher, rational faculties to work that fact to our advantage. Skinner’s Law is named after B. F. Skinner, the American behaviorist who developed the idea of operant conditioning with his experiments on rats and pigeons. Skinner’s main argument was that human beings can, just as rats can, be conditioned to behave a certain way when given the correct pain-pleasure incentives.
The trick, then, is to set yourself pleasure rewards or pain punishments for doing (or not doing) a certain task. Essentially, there are two ways to motivate yourself: intrinsically or extrinsically. Intrinsic motivation is when you want to do something out of some inherent drive or desire. You might just want to eat pizza. Extrinsic motivation, though, is when you do something for some further benefit or reward (or to avoid a punishment). So, I don’t eat my pizza because I want to be trim for my beach holiday. The genius behind Skinner’s Law is that it turns our most powerful intrinsic motivator (pleasure) into an extrinsic reward.
Milkman told Big Think that these kinds of techniques are called commitment devices in the behavioral psychology literature. “It’s a tool for a person to self-motivate,” Milkman told us. “It’s something where you opt in to creating an extrinsic reward system.” She told us of a study involving smokers trying to quit: two groups were each given the same “standard smoking cessation products,” but one group was also told they would lose money if they failed a nicotine urine test in six months. “What they found was that it increases quit rates by about 30%.”
The strongest master
We know, then, that if we want to succeed in any task, we need to set compelling commitment devices to keep us on track or to raise the stakes. The next question is: How can we make the best kind of commitment device? Is it better to promise yourself pleasure or to threaten yourself with pain?
It turns out that pain is by far the stronger motivator. As Milkman told us, “So [Daniel] Kahneman won the 2002 Nobel Prize for a theory called ‘Prospect Theory.’ He and Amos [Tversky] showed that we find pain more motivating than equivalent pleasure. For example, if you lose $20, you’re more upset than you are happy if you find $20. The pain outweighs the pleasure.”
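Kahneman and Tversky later gave this asymmetry a standard mathematical form. In their 1992 parameterization (a textbook summary on my part, not something Milkman cited here), the felt value of a gain or loss x is:

$$ v(x) = \begin{cases} x^{\alpha}, & x \ge 0 \\ -\lambda(-x)^{\alpha}, & x < 0 \end{cases} \qquad \alpha \approx 0.88, \quad \lambda \approx 2.25 $$

so losing $20 stings roughly 2.25 times as much as finding $20 delights.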
Drawing on all of this, plus Milkman’s advice, here are three practical suggestions:
Make a wager: Find a friend or a family member, and bet them some sum of money or some item you value that you will do a certain thing. “Okay, Dad,” you might say, “if I’m not 5 pounds lighter by my birthday, you can have my PlayStation 5.” Ideally, place the “wager” in some kind of intermediary location so you can’t back out of the deal if you lose. Commitment devices only work if you can’t slip out of them.
Social accountability: Tell everyone you’re trying to do something. Tell them your target and your deadline. Keep people updated about your progress. This serves two purposes: to present the carrot and the stick. The carrot is that you get praise, support, and advice from your closest relatives. The stick is that you might be embarrassed or ashamed if you fail.
Avoid boredom: It’s easy to assume that boredom is some middle, neutral state somewhere between pleasure and pain—it’s just absence. But not according to Milkman. As she put it, “There is research to show a fundamental dislike of boredom. It causes us pain to be bored. In 2016, one study showed that when presented with a long, monotonous, and tedious film fragment, people would rather shock themselves than be bored.” So, as a general rule, try to keep yourself busy. The devil makes work for idle hands; bored people do silly things.
This article Feeling unmotivated? Use “Skinner’s Law” to get yourself back on track is featured on Big Think.
“Currently, I’m greatly puzzled by the concept of competition as the driving force in this reality, which it of course is. Is war the logical conclusion of competition or is it just some sort of aberration? Why do some people become more creative when competition is removed from their personal reality, and others become decadent and degenerate? Is competition something to transcend or embrace? I do not yet know.”
These musings were penned in a letter to my old girlfriend Debbie, still living in the Kerista commune in San Francisco. In the mid-eighties, my thoughts seemed to bounce back and forth between two dramatically different worlds: on one hand, the burgeoning world of spiritual and esoteric exploration that captivated me, and on the other, my ongoing readings about business, economics, and capitalism. As I wrote to Debbie that day, I reflected at length on both.
In particular, I struggled to reconcile my direct experience of the unity or oneness of all reality—which I also felt in the playful, cooperative spirit of the business—with the focus on competition that I found in my readings of free market economists like Milton Friedman and Friedrich Hayek. Both felt like essential elements of life. Both felt positive to me. Competition could be generative. And yet competition, taken too far, also had a dark side, like war.
Cooperation could be generative too. Some days, the camaraderie among the team at Whole Foods Market felt intoxicating—everyone playing a choreographed role, moving through the stores like dancers in a chaotic but beautiful ballet of form and function. If ever there was an experience that made me believe in the power of cooperation, that was it. I loved the community and creative energy that we generated together. And yet, I was also acutely aware of how being an entrepreneur had channeled my own competitive instincts. I thrived on competition, loved to excel in sports and in business. I was driven to outdo our competitors, to win in the marketplace. The competitive drive was critical to our success, and my readings convinced me that it was essential to the entire capitalist enterprise.
I found it fascinating to read about economics and capitalism writ large, and then to reflect on how those processes were playing out in our still-young company. I was amazed at how a good idea could become a store, and a successful store could become several, turning into a real business, and a real business could grow to become a large company. And that chain of success could potentially change an entire industry. Of course, none of this would have been possible without a large degree of political and economic freedom—as Friedman pointed out again and again in his books. [Whole Foods Market precursor] Safer Way could never have gotten started if [former partner] Renee [Lawson Hardy] and I had not lived in a place where we had the freedom to start a new business. Thankfully, the barriers to entry were pretty minimal. “Wherever you have freedom, you have capitalism,” Friedman wrote, and I could see the truth in his words.
Capitalism, I began to understand, was not so much a top-down imposed system but, rather, an evolutionary result of letting people choose their own economic paths. And if economic freedom was the foundation of capitalism, then both cooperation and competition seemed to be the essential engines that made it run.
These thoughts, while compelling to me, were near heretical among many of my friends. While Texas was a conservative state with a proud business tradition, the countercultural types who made up my social circles and much of the Whole Foods Market team and customer base were politically progressive. Probably more than a few were closet socialists. Competition, to them, was about selfishness and greed and should be transcended in favor of love and cooperation.
Love and cooperation are beautiful qualities—of that I had no doubt. My most profound glimpses into the nature of the universe had shown me a dance of love and unity. I wanted the world I lived in, and the business I worked in, to reflect those ultimate spiritual truths. And I knew that unrestrained competition—without any rules or ethics constraining lying, cheating, and violence—could be highly destructive and would undermine the foundation of capitalism and our collective prosperity, as I wrote in my letter to Debbie. But I also saw the positive power of competition. And I saw the downsides to cooperation. Taken too far, it could lead to a kind of stultifying bureaucratic collectivism that made it hard to get anything done and would stifle creativity and innovation.
I thought of my experience at [communal living co-op] Prana House—I had loved the community, but discussing all community decisions exhaustively as a group was very time-consuming. I didn’t ever want to imagine a larger society based around such impractical ideals. And so, around and around I went in my mind.
Competition, cooperation—both seemed necessary; both were powerfully creative; both had dangers if pushed too far. Of course, a synthesis of both in healthy forms was the answer to my puzzle, but I did not realize it back then.
This article, “Leadership masterclass: Fine-tune the ‘essential engines’ of business,” is featured on Big Think.
Some 500 years ago, there was one scientific phenomenon that was, without controversy, extremely well-understood: the motion of the celestial objects in the sky. The Sun rose in the east and set in the west with a regular, 24-hour period. Its path in the sky rose higher and the days grew longer until the summer solstice, while its path was the lowest and shortest on the winter solstice. The stars exhibited that same 24-hour period, as though the heavenly canopy rotated throughout the night. The Moon migrated night-to-night relative to the other objects by about 12° as it changed its phases, while the planets wandered according to the geocentric rules of Ptolemy and, later, refinements put forth by others.
For over 1000 years, this Earth-centered view of our Universe went largely unchallenged, and became nearly universally accepted.
We often ask ourselves: How was this possible? How did this geocentric picture of the Universe hold up, without any of science’s greatest minds contesting it, for generation after generation, for more than a millennium? There’s a common narrative that dogmatism was to blame: that the “facts” of a stationary Earth at the center of the Universe were unchallengeable, and that no one was even allowed to question them. But the truth is far more complex. The geocentric model held sway for so long not because of the oft-ascribed problem of groupthink, but because the evidence fit a geocentric Universe so well: far better than it fit any of the alternatives that had been put forth. The biggest enemy of progress isn’t groupthink at all, but the unrivaled successes of the leading, already-established theory. Today, although many complain about “groupthink” as a major problem in science, it’s actually the successes of our current picture of the Universe that present the greatest difficulties when searching for a scientific revolution.

This chart, from around 1660, shows the signs of the zodiac and a model of the solar system with Earth at the center. For decades or even centuries after Kepler clearly demonstrated that not only is the heliocentric model valid, but that planets move in ellipses around the Sun, many refused to accept it, hearkening back instead to the ancient, geocentric ideas of Ptolemy.
Credit: Johannes Van Loon, Andreas Cellarius Harmonia Macrocosmica, 1660/61
Although we normally consider Copernicus, and his 16th-century treatise, the beginning of heliocentrism, that’s not exactly true. It may not be a particularly well-known fact, but the idea of a heliocentric Universe goes back (at least) over 2000 years. All the way back in the 3rd century BCE, the legendary scientist Archimedes published a book called The Sand Reckoner, in which he contemplates the Universe beyond Earth. Although he wasn’t quite convinced by the argument against geocentrism, he recounts the (now lost) work of his contemporary, Aristarchus of Samos, who put forth the following idea:
“His hypotheses are that the fixed stars and the sun remain unmoved, that the earth revolves about the sun on the circumference of a circle, the sun lying in the middle of the orbit, and that the sphere of the fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.”
In other words, way back when the Great Wall of China was being built, Aristarchus proposed that the Sun and the stars were stationary, that the Earth revolves around the Sun, and that the stars are so distant that the ratio of the Earth’s distance to the Sun’s size is about the same as the ratio of the distance to the stars to the size of Earth’s orbit. (The latter ratio is actually about 580 times greater than the former.) The work of Aristarchus was recognized as having great importance for two major reasons that, surprisingly, have nothing to do with the idea of heliocentrism.

The observed path that the Sun takes through the sky can be tracked, from solstice to solstice, using a pinhole camera. The lowest path is the winter solstice, where the Sun reverses course from dropping lower to rising higher with respect to the horizon, while the highest path corresponds to the summer solstice.
Credit: Regina Valkenborgh
Why do the heavens appear to rotate? This was an enormous question at the time. When you look at the Sun, it appears to move through the sky in an arc each day, where the arc that’s observable from Earth is merely a fraction of a 360° circle: corresponding to an apparent motion for the Sun of about 15° each hour. The stars move in precisely the same fashion: the entire night sky seems to rotate about the Earth’s north or south celestial pole (depending on your hemisphere) at that exact same rate, about 15° per hour. The planets and Moon do nearly the same thing, just with the tiny, extra addition of their nightly drift relative to the background of stars (~12° per night for the Moon; a small fraction of a degree per night for a planet like Jupiter).
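To put rough numbers on these rates, here is a minimal Python sketch (ours, not the article’s) that converts rotation and orbital periods into apparent angular rates. The periods are rounded modern values; the Moon’s mean drift comes out near 13° per night, in the same ballpark as the ~12° quoted above.

```python
# Back-of-the-envelope apparent angular rates from simple mean motions.
FULL_CIRCLE_DEG = 360.0

# Diurnal rotation: the whole sky appears to turn once per day.
sky_rate_per_hour = FULL_CIRCLE_DEG / 24.0  # ~15 deg/hour

# Nightly drift against the background stars, from sidereal orbital periods.
moon_drift_per_day = FULL_CIRCLE_DEG / 27.3                 # ~13 deg/night
jupiter_drift_per_day = FULL_CIRCLE_DEG / (11.86 * 365.25)  # ~0.08 deg/night

print(f"Sky rotation:  {sky_rate_per_hour:.1f} deg/hour")
print(f"Moon drift:    {moon_drift_per_day:.1f} deg/night")
print(f"Jupiter drift: {jupiter_drift_per_day:.2f} deg/night")
```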
The issue is that, based on these observations alone, there are two conceptual ways that are equally good at accounting for these observed motions.
The Earth is stationary, and the heavens (and everything in them) rotate about the Earth with a rotational period of 360° every 24 hours. On top of that, the Moon and planets have a slight, extra motion superimposed: motion that could be accounted for by their additional movement through space.
The stars and other heavenly bodies are all stationary, while the Earth rotates about its axis with a rotational period of 360° every 24 hours. Again, the Moon and planets have a slight, extra motion superimposed: motion that could be accounted for by their additional movement through space.
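To see how complete this degeneracy is, consider a toy calculation (our illustrative sketch; the variable names are invented). Treat each scenario as a rotation rate, and notice that only the difference between the sky’s angle and the observer’s angle is ever observable:

```python
# Toy model: only the RELATIVE angle between observer and sky is observable.
# Scenario 1: the heavens rotate over a fixed Earth.
# Scenario 2: the Earth rotates under fixed heavens.
for hour in range(0, 25, 6):
    sky_1, observer_1 = -15.0 * hour, 0.0  # rotating heavens, fixed Earth
    sky_2, observer_2 = 0.0, 15.0 * hour   # fixed heavens, rotating Earth
    apparent_1 = (sky_1 - observer_1) % 360
    apparent_2 = (sky_2 - observer_2) % 360
    assert apparent_1 == apparent_2        # indistinguishable by sky-watching
    print(f"t = {hour:2d} h: apparent position {apparent_1:5.1f} deg")
```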
If all we saw were the objects in the sky, either one of these explanations could fit the data perfectly well.

Satellites, planes, and comets transit across the night sky under stars that appear to rotate above Corfe Castle on August 12, 2016 in Corfe Castle, United Kingdom. The apparent motion of the objects in Earth’s sky could be explained either by the Earth rotating beneath our feet or by the heavens above rotating about a fixed Earth. Simply by watching the skies, we cannot tell these two explanations apart.
Credit: Dan Kitwood/Getty Images
And yet, practically everyone in the ancient, classical, and medieval world went with the first explanation and not the second.
Why? Was this a case of dogmatic groupthink?
Hardly. There were two major objections that were raised even back in the ancient world to the second scenario: the scenario of a rotating Earth. Neither one of these objections was successfully addressed until much more modern times: during the Renaissance.
The first objection is that if you dropped a ball on a rotating Earth, it shouldn’t fall straight down from the perspective of someone standing on the Earth. Instead, the ball should fall straight down while the person on the Earth moved along with the (rotating) Earth: motion that should appear different from the straight-line motion of the falling ball. This was an objection that persisted through the time of Galileo, and was only resolved with an understanding of relative motion and the independent evolution of horizontal and vertical components for projectile motion. Today, these properties form the basis of what’s known as Galilean relativity.
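As a quick illustration of that resolution (a minimal sketch, assuming uniform horizontal motion and ignoring the tiny Coriolis correction), the dropped ball and the person who dropped it share the same horizontal velocity, so their horizontal separation stays zero:

```python
# Why a dropped ball lands at your feet even on a moving Earth: the ball
# and the observer share Earth's horizontal velocity, so in the observer's
# frame the ball simply falls straight down.
g = 9.81          # m/s^2, gravitational acceleration
v_ground = 465.0  # m/s, eastward speed of the ground at the equator
h = 20.0          # m, drop height

t_fall = (2 * h / g) ** 0.5
ball_x = v_ground * t_fall       # horizontal distance the ball covers...
observer_x = v_ground * t_fall   # ...which the observer covers, too

print(f"Fall time: {t_fall:.2f} s")
print(f"Offset between ball and observer: {ball_x - observer_x:.2f} m")  # 0.00
```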
The second objection was even more severe. If the Earth rotated about its axis every 24 hours, then your position in space would differ by the diameter of Earth — about 12,700 km (7,900 miles) — from the start of the night to the end of the night. That difference in position should result in what astronomers call parallax: the apparent shifting of closer objects relative to more distant ones.

The stars that are closest to Earth will appear to shift periodically with respect to the more distant stars as the Earth moves through space in orbit around the Sun. Before the heliocentric model was established, we weren’t looking for shifts across a ~300,000,000 km baseline over the span of ~6 months, but rather across a ~12,700 km baseline over the span of one night: Earth’s diameter as it rotated on its axis. The distances to the stars are so great that the first parallax, with a 300 million km baseline, wasn’t detected until the 1830s. Today, we’ve measured the parallax of over 1 billion stars with ESA’s Gaia mission.
Credit: ESA/ATG medialab
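Just how hopeless was the ancient search? A rough small-angle estimate (our own sketch; the distance is approximate) shows that a single night’s baseline produces a shift tens of thousands of times below naked-eye acuity, which is roughly one arc-minute. Note that astronomers conventionally define parallax using a 1 AU baseline, which is why the figure quoted below for Alpha Centauri (0.74″) is half of the full six-month swing computed here:

```python
ARCSEC_PER_RADIAN = 206265.0

def parallax_arcsec(baseline_km: float, distance_km: float) -> float:
    """Small-angle approximation: shift (in radians) ~ baseline / distance."""
    return (baseline_km / distance_km) * ARCSEC_PER_RADIAN

d_alpha_cen_km = 4.1e13  # ~4.3 light-years: the nearest star system

# One night of rotation: baseline ~ Earth's diameter.
print(f"{parallax_arcsec(1.27e4, d_alpha_cen_km):.6f} arcsec")  # ~0.000064
# Six months of orbit: baseline ~ 2 AU (~300 million km).
print(f"{parallax_arcsec(3.0e8, d_alpha_cen_km):.2f} arcsec")   # ~1.51
```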
If the stars were actual objects in space at great distances from Earth, then the closest ones should appear to exhibit this parallax relative to the more distant ones: after sunset, the closest stars should appear shifted by a small but noticeable amount relative to their positions just before sunrise. And yet, no matter how acute your vision was, nobody had ever observed a parallax for a single one of the thousands of stars visible in the sky. If they were at different distances and the Earth was rotating, we’d expect to see the closest ones shift position from the beginning of the night to the end of the night.
Despite this prediction, no parallax was observed for well over 1000 years: not until the 1830s, in fact, well after the invention of the telescope. With no evidence for a rotating Earth here on Earth’s surface, and no evidence for parallax (and hence, a rotating Earth) among the stars in the heavens, it became difficult to attribute the apparent motions of the Sun, stars, and other heavenly objects to a rotating Earth. The data didn’t support that hypothesis, while the alternative explanation of a stationary Earth and a rotating sky — a “celestial sphere” beyond Earth’s sky — wasn’t contradicted by any observables. For that reason, the rotating celestial sphere emerged as the favored explanation.

This Foucault pendulum, on display in action at the Ciudad de las Artes y de las Ciencias de Valencia in Spain, rotates substantially over the course of a day, knocking down various pegs (shown on the floor) as it swings and the Earth rotates. This demonstration, which makes the rotation of the Earth unmistakable, was only devised in the 19th century.
Credit: Daniel Sancho/flickr
Were we wrong? In hindsight, we absolutely were.
The Earth does rotate, but we didn’t have the tools, the knowledge, or the precision to make quantitative predictions for what we’d expect to see. The key experiment that allowed us to observe the rotation here on Earth, the Foucault pendulum, wasn’t developed until the 19th century. Similarly, the first stellar parallax wasn’t seen until the 19th century either, owing to the fact that the distances to the stars are enormous: it takes the Earth migrating by hundreds of millions of kilometers over weeks and months, not thousands of kilometers over a few hours, for the best telescopes of the era to detect the shift. (The largest parallax of any star belongs to the Alpha Centauri system, whose maximum parallax is just 0.74 arc-seconds, or about 1/5000th of a degree.)
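The pendulum’s behavior, for what it’s worth, is easy to estimate: its swing plane precesses at a rate proportional to the sine of your latitude (a minimal sketch; the Valencia latitude is approximate):

```python
import math

# A Foucault pendulum's swing plane precesses at 360 degrees * sin(latitude)
# per sidereal day (~23.93 hours): zero at the equator, a full turn at a pole.
def precession_deg_per_hour(latitude_deg: float) -> float:
    return 360.0 * math.sin(math.radians(latitude_deg)) / 23.93

print(f"Valencia (~39.5 N): {precession_deg_per_hour(39.5):.1f} deg/hour")
print(f"North Pole:         {precession_deg_per_hour(90.0):.1f} deg/hour")
```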
The problem was that we didn’t have the evidence at hand to tell these two predictions apart, and we (incorrectly) conflated “absence of evidence” with “evidence of absence.” We couldn’t detect a parallax among the stars, which we expected for a rotating Earth, so we concluded that the Earth wasn’t rotating. We couldn’t detect an aberration in the motion of falling objects, so we concluded that the Earth wasn’t rotating. We must always keep in mind, in science, that an effect we’re looking for but haven’t yet seen might actually be present: just below the threshold of what we’re capable of measuring.

61 Cygni was the first star to have its parallax measured and published (back in 1838), but it is also a difficult case due to its large proper motion. These two images, stacked in red and blue and taken almost exactly one year apart, show this binary star system’s fantastic speed. If you want to measure the parallax of an object to extreme accuracy, you’ll make your two “binocular” measurements simultaneously, to avoid the effect of the star’s motion through the galaxy. Gaia is exceptionally good at characterizing the orbits of nearby stars with small separations from their companions, but faces more challenges with more distant, wider binary systems.
Credit: Lorenzo2/Astrofili forums
Still, Aristarchus was able to make important advances that survived throughout the ages, and that were quite significant for his time. First off, he demonstrated that he wasn’t dogmatic about his own ideas: he was able to set heliocentrism aside and instead use light and geometry within a geocentric framework to concoct the first method for measuring the distances to the Sun and the Moon, and hence to estimate their sizes as well. Although his values were way off — mostly due to “observing” a dubious effect now known to be beyond the limits of human vision — his methods were sound, and applied to modern data, those same methods accurately yield the distances to the Sun and the Moon, as well as the physical sizes of each.
It wasn’t until the 16th century, when Nicolaus Copernicus came onto the scene, that interest in Aristarchus’s heliocentric ideas revived. Copernicus noted that the most puzzling aspect of planetary motion, the periodic “retrograde” motion of the planets, could be explained equally well from two different perspectives.
Planets could orbit according to the geocentric model, in which each planet moved along a small circle (an epicycle) whose center traveled along a larger circle around the Earth, causing the planet to physically move “backwards” at occasional points in its orbit.
Or planets could orbit according to the heliocentric model, in which every planet orbited the Sun in a circle, and when an inner (faster-moving) planet overtook an outer (slower-moving) one, the observed planet appeared to change direction temporarily.
One of the great puzzles of the 1500s was how planets moved in an apparently retrograde fashion. This could either be explained through Ptolemy’s geocentric model (left), or Copernicus’ heliocentric one (right). However, getting the details right to arbitrary precision was something neither one could do. It would not be until Kepler’s notion of heliocentric, elliptical orbits, and the subsequent mechanism of gravitation proposed by Newton, that heliocentrism would triumph by scientific standards.
Credit: E. Siegel/Beyond the Galaxy
Why do the planets appear to make retrograde paths? This, then, became the key question for astronomers and those who studied Earth’s place in the Universe. Now, humanity had two potential explanations with vastly different perspectives, yet both were capable of producing the phenomenon that was observed. On the one hand, we had the old, prevailing, geocentric model, which accurately and precisely explained what we saw. On the other hand, we had the new, upstart (or resurrected, depending on your perspective), heliocentric model, which could also explain what we saw. At least, they could both qualitatively explain what was observed. But in science, it’s the best quantitative explanation, the one that accounts for “how much” of an effect we see, that will win out.
Unfortunately, the geocentric predictions of the 16th century were more accurate — with fewer and smaller observational discrepancies — than the heliocentric model’s. Copernicus could not reproduce the motions of the planets with a heliocentric system even as well as the geocentric model could, no matter which parameters he assigned to the various circular orbits of the planets. To remedy this, Copernicus even attempted to add epicycles to the heliocentric model, seeking to improve the orbital fits. Even with this ad hoc fix, his heliocentric model, although it generated renewed interest in the problem, did not perform as well as the geocentric model in practice.

Mars, like most planets, normally migrates very slowly across the sky in one predominant (prograde) direction. However, roughly once every 26 months, Mars will appear to slow down in its migration across the sky, stop, reverse directions, speed up and slow down, and then stop again, resuming its original motion. This retrograde (east-to-west) period stands in contrast to Mars’s normal prograde (west-to-east) motion against the background stars, and it presented a scientific challenge for centuries.
Credit: E. Siegel/Stellarium
It wouldn’t be until the 17th century that the heliocentric model finally gained support and overthrew the geocentric model in a legendary scientific revolution. But why did it take so long? The reason it took close to 2000 years isn’t groupthink or a lack of imagination, but rather how successful the geocentric model was at describing what we observed, and how poorly the alternatives fared in comparison. The positions of the heavenly bodies could be modeled exquisitely with the geocentric model, in a way the heliocentric model could not reproduce.
It was only the 17th-century work of Johannes Kepler — who tossed out the Copernican assumption (one he himself had once adhered to) that planetary orbits must be built from circles — that led to the heliocentric model finally overtaking the geocentric one. What was most remarkable about Kepler’s achievement wasn’t:
that he used ellipses instead of circles,
that he overcame the dogma or groupthink of his day,
or that he actually put forth laws of planetary motion, rather than merely a model of it.
Instead, Kepler’s heliocentrism, with elliptical orbits, was so remarkable because, for the first time, an idea had come along that described the Universe, including the motion of the planets, better and more comprehensively than the previous (geocentric) model could.

Tycho Brahe conducted some of the best observations of Mars prior to the invention of the telescope, and Kepler’s work largely leveraged that data. Brahe’s observations of Mars’s orbit, particularly during retrograde episodes, provided an exquisite confirmation of Kepler’s theory of elliptical orbits. Kepler put forth his 1st and 2nd laws of planetary motion in 1609, with his 3rd law coming 10 years later, in 1619.
Credit: Wayne Pafko
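That third law, from 1619, is simple enough to check for yourself: the square of a planet’s orbital period is proportional to the cube of its orbit’s semi-major axis. A minimal sketch, using rounded modern values rather than Kepler’s own data:

```python
# Kepler's 3rd law, T^2 proportional to a^3, checked with rough values.
planets = {
    #           a (AU)   T (years)
    "Mercury": (0.387,  0.241),
    "Earth":   (1.000,  1.000),
    "Mars":    (1.524,  1.881),
    "Jupiter": (5.203, 11.862),
}
for name, (a, T) in planets.items():
    # In these units, T^2 / a^3 should come out ~1 for every planet.
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")
```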
There are three hallmarks of any scientific revolution, in which a new theory comes along looking to supplant the old one.
The new theory succeeds wherever the old theory did.
The new theory explains an observed phenomenon that the old theory couldn’t account for.
And the new theory, in comparison to the old theory, makes differing predictions that we can then go out and test.
In particular, the (highly eccentric) orbit of Mars, previously the biggest point of trouble for Ptolemy’s model, was an unequivocal success for Kepler’s ellipses. Under even the most stringent of conditions, where the geocentric model departed most severely from what was observed, the heliocentric model had its greatest successes. That’s often the test case: look where the prevailing theory has the greatest difficulty, and try to find a new theory that not only succeeds where the prior one fails, but also succeeds in every instance where the prior one succeeds.
Kepler’s laws paved the way for Newton’s law of universal gravitation, and they apply equally well to the Earth-Moon system, to the moons of Jupiter and Saturn within our Solar System, and to the motions of planets in exoplanetary systems here in the 21st century. One can complain that it took some ~1800 years from Aristarchus until heliocentrism finally superseded our earlier geocentric notions, but the truth is that until Kepler and the advent of elliptical orbits, there was no heliocentric model that matched the data and observations as well as Ptolemy’s model did.

The Muon g-2 electromagnet at Fermilab, ready to receive a beam of muon particles. This experiment began in 2017 and continues to take data, having reduced the uncertainties in the experimental values significantly. Theoretically, we can compute the expected value perturbatively, by summing Feynman diagrams, getting a value that disagrees with the experimental results. The non-perturbative calculations, via Lattice QCD, seem to agree, however, deepening the puzzle of the muon’s anomalous magnetic moment.
Credit: Reidar Hahn/Fermilab
In fact, it’s easy to envision a slightly different version of human history, in which the geocentric model held sway for even longer. The only reason this scientific revolution occurred when it did is that there were already well-established “cracks” in the theory: places, such as the orbit of Mars (and, to a lesser extent, Mercury), where observations and predictions failed to perfectly align. Whenever there’s a mismatch between what’s predicted and what’s measured, there’s an opportunity for a new revolution, but even that is not guaranteed. This leads to some fascinating questions that puzzle scientists even today.
Are dark matter and dark energy real, or is this an opportunity for a revolution?
Do the different measurements for the expansion rate of the Universe signal a problem with our techniques, or are they an early indication of potential new physics?
What do non-zero neutrino masses indicate: a simple mixing, as in the case of quarks, or a first step toward a leap beyond the Standard Model?
And what of the muon g-2 experiment? Is this a case where experiment differs from theory, or a case where we’ve simply made theoretical mistakes in our calculations?
It’s important to explore all possibilities, even the wildest ones, but to always ground ourselves in the reality of the observations and measurements we can make. If we ever want to go beyond our current understanding, any alternative theory has to not only reproduce all of our present-day successes, but also succeed where our current theories cannot. That’s why scientists are often so resistant to new ideas: not because of groupthink, dogma, or inertia, but because most new ideas never clear even the first of those epic hurdles, and are inconsistent with the established data we already possess. Whenever the data clearly indicates that one theoretical alternative is superior to all the others, however, a scientific revolution is sure to follow.
This article, “Is fundamental science a victim of its own success?”, is featured on Big Think.
Sometimes the best thing in life is a simple rule that promises to solve a complex problem. Bonus points if the problem concerns human behavior, and the rule sports a catchy, easy-to-remember number.
Perhaps a few of these sound familiar?
The 10,000-hour rule. It takes 10,000 hours to master a skill.
The 21-day habit loop. It takes 21 days to form a new habit.
The Myers-Briggs Type Indicator. There are 16 personality types.
The 80-20 rule. 80% of results stem from 20% of causes.
The 50-40-10 rule. 50% of happiness is genetic; 40% comes from our choices and activities, and 10% from life circumstances.
In each case, a modicum of research gave a scientific spit shine to an otherwise casual observation. Yes, it takes time and dedication to master a skill, but there’s nothing magical about the 10,000-hour mark. Yes, genetics play a role in happiness, but so do our choices and circumstances. And yes, people have different personalities, but there’s little valid and reliable data to suggest everyone will fit neatly into one of 16 boxes.
That brings us to the 7-38-55 rule. This one claims that only 7% of a conversation’s meaning is found in the words. The remaining 93% comes from the speaker’s tone of voice and body language (38% and 55%, respectively). It’s a neat and tidy formula, one that promises to let you cut through the verbal fluff and peer at what someone is actually telling you — or even hiding from you. The reality is, of course, much more complicated, but to understand why, we need to go back to the original research.
Formulating the 7-38-55 rule
The 7-38-55 ratio was coined by psychologist Albert Mehrabian, who, in the late 1960s, performed two studies that would serve as its foundation.
In his first study, Mehrabian wanted to determine if listeners detected emotional cadence more through words or intonation. He asked 30 women participants to listen to words spoken in different tones of voice (positive, neutral, or negative). Sometimes the words and tones matched up, such as saying “thanks” in a positive voice. Sometimes they were incongruous, such as saying “thanks” in a negative voice. He found that participants were better at detecting the emotional cadence in the intonation.
His second study was similar, except this time 37 women participants were given photographs of a person’s face bearing different expressions (like, neutral, or dislike). The participants would hear a word spoken aloud while looking at a photo, and they had to determine the emotion. Again, sometimes the intonations and facial expressions matched up; other times, not. This time, Mehrabian found the participants were better at detecting the emotional cadence in facial expressions.
Combining these results, Mehrabian devised the 7-38-55 ratio as a shorthand for the different values participants placed on verbal and nonverbal emotional cues. He later referenced the ratio in his book Silent Messages (1971), and it’s through that book that the rule connected with a popular audience before taking on a life of its own.
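In its popularized form, the ratio gets rendered as a fixed linear blend of cues. Here is a minimal sketch of that formula (the function name and the -1 to +1 scale are our own illustrative choices, and, as discussed below, Mehrabian insists it applies only to communications of feelings and attitudes):

```python
# The popularized reading of Mehrabian's result: a fixed linear blend of
# the "liking" conveyed by words, tone of voice, and facial expression.
def total_liking(verbal: float, vocal: float, facial: float) -> float:
    """Each input is a liking score on a common scale: -1 (dislike) to +1 (like)."""
    return 0.07 * verbal + 0.38 * vocal + 0.55 * facial

# A warm "thanks" delivered with a scowl: the face dominates the blend.
print(total_liking(verbal=1.0, vocal=1.0, facial=-1.0))  # -0.10
```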
Lost in the popular translation
As you may have noticed, Mehrabian’s research has its shortcomings. It focused on an artificial situation conducted in a laboratory with small sample sizes of only women participants. The studies never considered body language other than facial expressions. No follow-up studies looked at the 7-38-55 ratio in other environments or with a different cohort. And, perhaps most critically for a rule touted as the key to unlocking a communication superpower, no actual conversations took place.
“Mehrabian’s research has been widely misinterpreted, and because of its limitations, any broad-based conclusions about the nature of communication simply cannot be derived from it,” David Lapakko, a communications professor at Augsburg University, writes in his review of the 7-38-55 rule.
None of which is to say that Mehrabian’s research is wrong or useless. It neatly demonstrates how sensitive we are to the feelings of others — especially when those emotions contradict what the person is saying. Such communication is critical to building and maintaining relationships. It also allows for language’s more playful qualities, such as comedy and sarcasm, to emerge.
However, nothing in Mehrabian’s research suggests the 7-38-55 ratio can be applied to communication as a whole. Unfortunately, popular interpretations of his research have largely learned the wrong lessons. It doesn’t matter what you say, only how you say it. Don’t listen to what people say, listen to the hidden message in their cough or crossed arms. All misleading lessons at best.
Even Mehrabian has argued that his research has been widely misrepresented and tried to set the record straight. As he wrote on his website: “Please note that this and other equations regarding relative importance of verbal and nonverbal messages were derived from experiments dealing with communications of feelings and attitudes […]. Unless a communicator is talking about their feelings or attitudes, these equations are not applicable.” [Emphasis ours.]
Just think about it
Unfortunately for Mehrabian, the 7-38-55 rule has grown into an intellectual urban legend. And just like the other “rules” listed above, with each retelling in self-help books, business articles, and keynote addresses, it gained further credence and lodged itself deeper in our popular imaginations.
This would normally mean that any attempt to debunk the 7-38-55 rule would fall prey to Brandolini’s law — which states that the amount of energy necessary to refute BS is an order of magnitude larger than the energy needed to produce it. But in this case, a simple thought experiment will do.
Imagine that you’re listening to a lecture. It can be on any subject you’d like: music, philosophy, space travel, the political aftermath of Attila the Hun’s invasion of Europe. Listener’s choice. Now imagine that same lecture delivered through a series of coos, grunts, gestures, and winks. Would you say you grasped 93% of the nuances concerning Roman and Hun diplomacy circa 450 AD? Probably not.
In a similar vein, if words really were only 7% of communication, then why would anybody need to learn a foreign language? You should be able to navigate any foreign culture handily with nods, meek smiles, and the occasional chest bump. But as anyone who has traveled internationally can tell you, a good translation dictionary clues you into far more than 7% of a person’s meaning; it’s vital when you lack an intimate grasp of a language, words included.
The 7-38-55 rule is the conversational equivalent of X-ray glasses. It promises to allow you to see through any conversation, but those promises are flimsy on closer inspection. (Credit: ORAU)
Rules of affective engagement
All of which leads to a simple, if slightly unsatisfying, takeaway: Verbal and nonverbal aspects of communication are important, and trying to quantify the relative importance of such qualitative experiences is kind of silly. It makes for an eye-catching headline, sure. But the ascribed numbers will often say more about a researcher’s methodology than any real-time chat you have with another person.
In fact, while we’ve been discussing communication as a three-way split between words, tone, and body language, the truth runs deeper. According to psychiatrist Jeff Thompson, to be better conversationalists, we need to pay attention to much more than the 7-38-55 ratio.
For instance, all conversations take place in a context. This includes things like the environment, the roles of the speakers, as well as their history and relationships — all of which provide vital information for any communication.
Communication cues also come to us in clusters. Communication gurus often sell the idea that reading simple cues can let you see through a person’s deceptive words to get at the truth. If a person crosses their arms, they are hiding something. If they don’t make eye contact, they are lying. Basically, conversational X-ray specs (just remember to include return postage with your mail-in coupon).
But is that person crossing their arms because they are hiding something or is it cold in the room? Are they not making eye contact because they are lying or simply shy or distracted? Because cues come in clusters, we can’t give outsized importance to a single one. We need to consider the tone, facial expressions, and other body language signals — and yes, the words spoken — of any conversation holistically.
And then there’s convergence: Do the words match the nonverbal signals we’re picking up? This one gets to the original purpose of the 7-38-55 theory, but even here, it’s just one of the many cues we need to pick up on to effectively communicate every day.
“[W]hen trying to understand others, a single gesture or comment does not necessarily mean something. Instead, these theories allow us to take note and observe more to get a better understanding of what is going on,” Thompson writes.
It’s nice to think that science and psychology can offer us precise formulas for solving life’s complex problems, but the truth is that such rules are false comforts. It takes hard work to master a skill, form a habit, and be happy in life. Similarly, it takes curiosity, empathy, perceptivity, and emotional intelligence to converse with others and create meaningful connections.
There’s no easy number to manage that, but maybe there’s also comfort to be found in knowing that you aren’t beholden to some prescribed formula. You can make your conversations your own.
This article, “The 7-38-55 rule: Debunking the golden ratio of conversation,” is featured on Big Think.