Topic: Complexity Economics Shows Us Why Laissez-Faire... [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 11.04.17 02:40




Markets are a type of ecosystem that is complex, adaptive, and subject to the same evolutionary forces as nature.

By Eric Liu and Nick Hanauer

During 2007 and 2008, giant financial institutions were obliterated, the net worth of most Americans collapsed, and most of the world’s economies were brought to their knees.

At the same time, this has been an era of radical economic inequality, at levels not seen since 1929. Over the last three decades, an unprecedented consolidation and concentration of earning power and wealth has made the top 1 percent of Americans immensely richer while middle-class Americans have been increasingly impoverished.

To most Americans and certainly most economists and policymakers, these two phenomena seem unrelated. In fact, traditional economic theory and contemporary American economic policy do not seem to admit the possibility that they are connected in any way.

And yet they are—deeply. We aim to show that a modern understanding of economies as complex, adaptive, interconnected systems forces us to conclude that radical inequality and radical economic dislocation are causally linked: one brings about and amplifies the other.

If we want a high-growth society with broadly shared prosperity, and if we want to avoid dislocations like the one we have just gone through, we need to change our theory of action foundationally. We need to stop thinking about the economy as a perfect, self-correcting machine and start thinking of it as a garden.

Traditional economic theory is rooted in a 19th- and 20th-century understanding of science and mathematics. At the simplest level, traditional theory assumes economies are linear systems filled with rational actors who seek to optimize their situation. Outputs reflect a sum of inputs, the system is closed, and if big change comes it comes as an external shock. The system’s default state is equilibrium. The prevailing metaphor is a machine.

But this is not how economies are. It never has been. As anyone can see and feel today, economies behave in ways that are non-linear and irrational, and often violently so. These often-violent changes are not external shocks but emergent properties—the inevitable result—of the way economies behave.
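
To make the contrast concrete, here is a minimal sketch (ours, not the authors'; the parameters are illustrative) of the logistic map, a textbook non-linear system whose booms and busts are emergent properties of the rule itself rather than external shocks:

    def logistic_map(r, x0, steps):
        """Iterate x -> r * x * (1 - x) and return the trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    # At r = 2.8 the system settles toward an equilibrium (~0.64), as the
    # machine metaphor expects; at r = 3.9 the identical rule never settles.
    print(logistic_map(2.8, 0.5, 10))
    print(logistic_map(3.9, 0.5, 10))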

The traditional approach, in short, completely misunderstands human behavior and natural economic forces. The problem is that the traditional model is not an academic curiosity; it is the basis for an ideological story about the economy and government’s role—and that story has fueled policymaking and morphed into a selfishness-justifying conventional wisdom.

Even today, the debate between free marketeers and Keynesians unfolds on the terms of the market fundamentalists: government stimulus efforts are usually justified as a way to restore equilibrium, and defended as regrettable deviations from government’s naturally minimalist role.

Fortunately, as we’ve described above, it is now possible to understand and describe economic systems as complex systems like gardens. And it is now reasonable to assert that economic systems are not merely similar to ecosystems; they are ecosystems, driven by the same types of evolutionary forces as ecosystems. Eric Beinhocker’s The Origin of Wealth is the most lucid survey available of this new complexity economics.

The story Beinhocker tells is simple, and not unlike the story Darwin tells. In an economy, as in any ecosystem, innovation is the result of evolutionary and competitive pressures. Within any given competitive environment—or what’s called a “fitness landscape”—individuals and groups cooperate in order to compete and to find solutions to problems, and successful strategies for cooperation spread and multiply. Throughout, minor initial advantages get amplified and locked in—as do disadvantages. Whether you are predator or prey, spore or seed, the opportunity to thrive compounds and then concentrates. It bunches. It never stays evenly spread.
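
To see how it "bunches", consider a hedged toy simulation (our sketch, not Beinhocker's model; every parameter is invented): identical agents with identical odds, where purely multiplicative luck still compounds into concentration.

    import random

    def simulate(agents=1000, rounds=200, seed=1):
        """Give every agent equal wealth and identical random growth odds."""
        random.seed(seed)
        wealth = [1.0] * agents
        for _ in range(rounds):
            # each round, wealth grows or shrinks by up to 10%, by luck alone
            wealth = [w * random.uniform(0.9, 1.1) for w in wealth]
        return sorted(wealth, reverse=True)

    w = simulate()
    # the luckiest 1% end up holding several times their even-spread share
    print(f"top 1% share: {sum(w[:10]) / sum(w):.1%}")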

Like a garden, the economy consists of an environment and interdependent elements—sun, soil, seed, water. But far more than a garden, the economy also contains the expectations and interpretations all the agents have about what all the other agents want and expect. And that invisible web of human expectations becomes, in an ever-amplifying spiral, both cause and effect of external circumstances. Thus the housing-led financial crisis. Complexity scientists describe it in terms of “feedback loops.” Financier George Soros has described it as “reflexivity.” What I think you think about what I want creates storms of behavior that change what is.
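
A toy sketch of such a loop (our illustration; the feedback parameter is invented): agents extrapolate the recent price trend, and their buying makes the extrapolation come true.

    def reflexive_prices(steps=20, feedback=1.2):
        """Each period, buyers extrapolate the last move; with feedback > 1,
        expectation amplifies reality and a small drift becomes a boom."""
        prices = [100.0, 101.0]          # a small initial upward drift
        for _ in range(steps):
            trend = prices[-1] - prices[-2]
            prices.append(prices[-1] + feedback * trend)
        return prices

    print([round(p) for p in reflexive_prices()])  # ends far above 100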

Traditional economics holds that the economy is an equilibrium system; that things tend, over time, to even out and return to “normal.” Complexity economics shows that the economy, like a garden, is never in perfect balance or stasis and is always both growing and shrinking. And like an untended garden, an economy left entirely to itself tends toward unhealthy imbalances. This is a very different starting point, and it leads to very different conclusions about what the government should do about the economy.

Einstein said, “Make everything as simple as possible, but not too simple.” The problem with traditional economics is that it has made things too simple and then compounded the error by treating the oversimplification as gospel. The bedrock assumption of traditional economic theory and conventional economic wisdom is that markets are perfectly efficient and therefore self-correcting. This “efficient market hypothesis,” born of the machine-age obsession with the physics of perfect mechanisms, is hard to square with intuition and reality—harder, it seems, for economic experts than for laypeople. And yet, like a dead hand on the wheel, the efficient market hypothesis still drives everything in economic policymaking.

Consider that if markets are perfectly efficient then it must be true that:

–The market is always right.

–Markets distribute goods, services, and benefits rationally and efficiently.

–Market outcomes are inherently moral because they perfectly reflect talent and merit and so the rich deserve to be rich and the poor deserve to be poor.

–Any attempt to control market outcomes is inefficient and thus immoral.

–Any non-market activity is inherently suboptimal.

–If you can make money doing something not illegal, you should do it.

–As long as there is a willing buyer and seller, every transaction is moral.

–Any government solution, absent a total market failure, is a bad solution.

But, of course, markets properly understood are not actually efficient. So-called balances between supply and demand, while representing a fair approximation, do not in fact really exist. And because humans are not rational, calculating, and selfish, their behavior in market settings is inherently imperfect, unpredictable, and inefficient. Laypeople know this far better than experts.

Markets are a type of ecosystem that is complex, adaptive, and subject to the same evolutionary forces as nature. As in nature, evolution makes markets an unparalleled way of effectively solving human problems. But evolution is purpose-agnostic. If the market is oriented toward producing junk and calling it good GDP, market evolution will produce ever more marketable junk. As complex adaptive systems, markets are not like machines at all but like gardens. This means, then, that the following must be true:

–The market is often wrong.

–Markets distribute goods, services, and benefits in ways that often are irrational, semi-blind, and overdependent on chance.

–Market outcomes are not necessarily moral—and are sometimes immoral—because they reflect a dynamic blend of earned merit and the very unearned compounding of early advantage or disadvantage.

–If well-tended, markets produce great results but if untended, they destroy themselves.

–Markets, like gardens, require constant seeding, feeding, and weeding by government and citizens.

–More, they require judgments about what kind of growth is beneficial. Just because dandelions, like hedge funds, grow easily and quickly, doesn’t mean we should let them take over. Just because you can make money doing something doesn’t mean it is good for the society.

–In a democracy we have not only the ability but also the essential obligation to shape markets—through moral choices and government action—to create outcomes good for our communities.

You might think that this shift in metaphors and models is merely academic. Consider the following. In 2010, after the worst of the financial crisis had subsided but still soon enough for recollections to be vivid and honest, a group of Western central bankers and economists got together to assess what went wrong. To one participant in the meeting, who was not a banker but had studied the nature of economies in great depth, one thing became strikingly, shockingly clear. Governments had failed to anticipate the scope and speed of the meltdown because their model of the economy was fantastically detached from reality.

For instance, the standard model used by many central banks and treasuries, called a dynamic stochastic general equilibrium model, did not include banks. Why? Because in a perfectly efficient market, banks are mere pass-throughs, invisibly shuffling money around. How many consumers did this model take into account in its assumptions about the economy? Millions? Hundreds of thousands? No, just one. One perfectly average or “representative” consumer operating perfectly rationally in the marketplace. Facing a crisis precipitated by the contagion of homeowner exuberance, fueled by the pathological recklessness of bond traders and bankers, abetted by inattentive government watchdogs, and leading to the deepest recession since the Great Depression, the Fed and other Western central banks found themselves fighting a crisis their models said could not happen.
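
To see concretely what omitting banks costs a model, here is a deliberately crude sketch (ours; not a DSGE model or any central bank's code, and the numbers are invented): a few heterogeneous banks with interbank claims, where one failure cascades in a way no single representative agent can exhibit.

    def cascade(capital, exposures, first_failure):
        """capital[i]: bank i's equity; exposures[i][j]: what j owes bank i."""
        failed = {first_failure}
        changed = True
        while changed:
            changed = False
            for i, equity in enumerate(capital):
                if i in failed:
                    continue
                # claims on failed banks become losses against equity
                if sum(exposures[i][j] for j in failed) > equity:
                    failed.add(i)
                    changed = True
        return failed

    capital = [10, 4, 4, 4]
    exposures = [[0, 0, 0, 0],
                 [5, 0, 0, 0],   # bank 1 holds a claim of 5 on bank 0
                 [0, 5, 0, 0],   # bank 2 holds a claim of 5 on bank 1
                 [0, 0, 5, 0]]   # bank 3 holds a claim of 5 on bank 2
    print(cascade(capital, exposures, first_failure=0))  # {0, 1, 2, 3}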

This is an indictment not only of central bankers and the economics profession; nor merely of the Republicans whose doctrine abetted such intellectual malpractice; it is also an indictment of the Democrats who, bearing responsibility for making government work, allowed such a dreamland view of the world to drive government action in the national economy. They did so because over the course of 20 years they too had become believers in the efficient market hypothesis. Where housing and banking were concerned, there arose a faith-based economy: faith in rational individuals, faith in ever-rising housing values, and faith that you would not be the one left standing when the music stopped.

We are not, to be emphatically clear, anti-market. In fact, we are avid capitalists. Markets have an overwhelming benefit to human societies, and that is their unmatched ability to solve human problems. A modern understanding of economies sees them as complex adaptive systems subject to evolutionary forces. Those forces enable competition for the ability to survive and succeed as a consequence of the degree to which problems for customers are solved. Understood thus, wealth in a society is simply the sum of the problems it has managed to solve for its citizens. Eric Beinhocker calls this “information.” As Beinhocker notes, less developed “poor” societies have very few solutions available. Limited housing solutions. Limited medical solutions. Limited nutrition and recreation solutions. Limited information. Contrast this with a modern Western superstore with hundreds of thousands of SKUs, each representing a unique solution to a unique problem.

But markets are agnostic to what kind of problems they solve and for whom. Whether a market produces more solutions for human medical challenges or more solutions for human warfare—or whether it invents problems like bad breath for which more solutions are needed—is wholly a consequence of the construction of that market, and that construction will always be human-made, either by accident or by design. Markets are meant to be servants, not masters.

As we write, the Chinese government is making massive, determined, strategic investments in their renewable energy industry. They’ve decided that it’s better for the world’s largest population and second-largest economy to be green than not—and they are shaping the market with that goal in mind. By doing so they both reduce global warming and secure economic advantage in the future. We are captive, meanwhile, to a market fundamentalism that calls into question the right of government to act at all—thus ceding strategic advantage to our most serious global rival and putting America in a position to be poorer, weaker, and dirtier down the road. Even if there hadn’t been a housing collapse, the fact that our innovative energies were going into building homes we didn’t need and then securitizing the mortgages for those homes says we are way off track.

Now, it might be noted that for decades, through administrations of both parties, our nation did have a massive strategic goal of promoting homeownership—and that what we got for all that goal-setting was a housing-led economic collapse. But setting a goal doesn’t mean then going to sleep; it requires constant, vigilant involvement to see whether the goal is the right goal and whether the means of reaching the goal come at too great a cost. Homeownership is a sound goal. That doesn’t mean homeownership by any means necessary is a sound policy. Pushing people into mortgages they couldn’t truly afford and then opening a casino with those mortgages as the chips was not the only way to increase homeownership. What government failed to do during the housing boom was to garden—to weed out the speculative, the predatory, the fraudulent.

Conventional wisdom says that government shouldn’t try to pick winners in the marketplace, and that such efforts are doomed to failure. Picking winners may be a fool’s errand, but choosing the game we play is a strategic imperative. Gardeners don’t make plants grow but they do create conditions where plants can thrive and they do make judgments about what should and shouldn’t be in the garden. These concentration decisions, to invest in alternative energy or not, to invest in biosciences or not, to invest in computational and network infrastructure or not, are essential choices a nation must make.

This is not picking winners; it’s picking games. Public sector leaders, with the counsel and cooperation of private sector experts, can and must choose a game to invest in and then let the evolutionary pressures of market competition determine who wins within that game. DARPA (the Defense Advanced Research Projects Agency), NIST (the National Institute of Standards and Technology), NIH (National Institutes of Health), and other effective government entities pick games. They issue grand challenges. They catalyze the formation of markets, and use public capital to leverage private capital. To refuse to make such game-level choices is to refuse to have a strategy, and is as dangerous in economic life as it would be in military operations. A nation can’t “drift” to leadership. A strong public hand is needed to point the market’s hidden hand in a particular direction.

Markets as Machines vs. Markets as Gardens

Understanding economics in this new way can revolutionize our approach and our politics. The shift from mechanistic models to complex ecological ones is not one of degree but of kind. It is the shift from a tradition that prizes fixity and predictability to a mindset that is premised on evolution. Compare two frames in capsule form:

Machine view: Markets are efficient, thus sacrosanct
Garden view: Markets are effective, if well tended

In the traditional view, markets are sacred because they are said to be the most efficient allocators of resources and wealth. Complexity science shows that markets are often quite inefficient—and that there is nothing sacred about today’s man-made economic arrangements. But complexity science also shows that markets are the most effective force for producing innovation, the source of all wealth creation. The question, then, is how to deploy that force to benefit the greatest number.

Machine view: Regulation destroys markets
Garden view: Markets need fertilizing and weeding, or else are destroyed

Traditionalists say any government interference distorts the “natural” and efficient allocation that markets want to achieve. Complexity economists show that markets, like gardens, get overrun by weeds or exhaust their nutrients (education, infrastructure, etc.) if left alone, and then die—and that the only way for markets to deliver broad-based wealth is for government to tend them: enforcing rules that curb anti-social behavior, promote pro-social behavior, and thus keep markets functioning.

Machine view: Income inequality reflects unequal effort and ability
Garden view: Inequality is what markets naturally create and compound, and requires correction

Traditionalists assert, in essence, that income inequality is the result of the rich being smarter and harder working than the poor. This justifies government neglect in the face of inequality. The markets-as-garden view would not deny that smarts and diligence are unequally distributed. But on this view, income inequality has much more to do with the inexorable tendency of complex adaptive systems like markets to produce self-reinforcing concentrations of advantage and disadvantage. This necessitates government action to counter the unfairness and counterproductive effects of concentration.

Machine view: Wealth is created through competition and by the pursuit of narrow self-interest
Garden view: Wealth is created through trust and cooperation

Where traditionalists put individual selfishness on a moral pedestal, complexity economists show that norms of unchecked selfishness kill the one thing that determines whether a society can generate (let alone fairly allocate) wealth and opportunity: trust. Trust creates cooperation, and cooperation is what creates win-win outcomes. High-trust networks thrive; low-trust ones fail. And when greed and self-interest are glorified above all, high-trust networks become low-trust. See: Afghanistan.

Machine view: Wealth = individuals accumulating money
Garden view: Wealth = society creating solutions

One of the simple and damning limitations of traditional economics is that it can’t really explain how wealth gets generated. It simply assumes wealth. And it treats money as the sole measure of wealth. Complexity economics, by contrast, says that wealth is solutions: knowledge applied to solve problems. Wealth is created when new ideas— inventing a wheel, say, or curing cancer—emerge from a competitive, evolutionary environment. In the same way, the greatness of a garden comes not just in the sheer volume but also in the diversity and usefulness of the plants it contains.

In other words, money accumulation by the rich is not the same as wealth creation by a society. If we are serious about creating wealth, our focus should not be on taking care of the rich so that their money trickles down; it should be on making sure everyone has a fair chance—in education, health, social capital, access to financial capital—to create new information and ideas. Innovation arises from a fertile environment that allows individual genius to bloom and that amplifies individual genius, through cooperation, to benefit society. Extreme concentration of wealth, without modern precedent, has undermined equality of opportunity and thus limited our overall economic potential.



Topic: + 39 Comments [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 11.04.17 02:41



sageflower • a year ago
I've read the book. It is clear and easy to read. The logic needs no enhancement. We all need to tend this garden or it will go to ruin. It is already well on its way there.
Je' Czaja • a year ago
Yup. The old metaphor of mechanical laws is out of date. They were so enamored of machines, they applied the model even to human beings and ground them up for fuel; they were just resources, like standing timber: human resources.
DWAnderson • a year ago
This piece suffers from two serious problems, but has one accurate insight.

First problem: Almost every description of current economic thinking is a straw man. No one believes that markets are infallible, that equilibrium states really exist, etc. They are all simplifications designed to highlight certain aspects of behavior. Almost everyone understands that models are useful for understanding aspects of how the world works, not for predicting specific states of the world.

Second problem: A Pollyannaish view of how government operates. It does not operate by the fiat of the omniscient and omnibenevolent. It has a very primitive feedback mechanism (voting), huge agency problems (bureaucracy), is subject to special interest capture, etc. That said, in the United States it benefits from well-meaning actors at many levels, but very often that is not sufficient to overcome its very serious problems and limitations.

The key insight is that the economy is like an ecosystem. True! When thinking about what the government ought to do, we should be thinking with more modesty about what the government can do to improve the operation of markets. IMHO this means trying to get the big things right (e.g. implementing a simple consumption tax or a carbon tax, intellectual property rights, etc.) while letting the actors in the market determine (imperfectly) where to make investments in industry. Investments in public goods are a closer call, but I would be modest about the likelihood that the government will get these decisions right and think we would be better off trying to see if there was some way to let market participants make these decisions, e.g. making it easier to build toll roads.
davidcayjohnston DWAnderson • a year ago
Actually, only the minority of people who have seriously studied economics understand your first, and accurate, point.

Sadly many Americans have no idea what equilibrium means, conflate the ideal barrierless "free" market with competitive markets and thus fail to see the damage from oligopolies, duopolies, stealth subsidies, etc.
jayrayspicer davidcayjohnston • a year ago
It's not a straw man argument if many people actually believe it, even if you don't. If you have a different argument, then the counterargument to the "straw man" may not apply to your argument, but it's still not a straw man; it’s just not addressing *your* argument. Perhaps you’re not their target audience.

Ah, but it's also not a straw man argument if it accurately states what you, in fact, believe, even if you state the orthodoxy more carefully than the hoi polloi do.

Liu and Hanauer say, "Traditional economic theory is rooted in a 19th- and 20th-century understanding of science and mathematics. At the simplest level, traditional theory assumes economies are linear systems filled with rational actors who seek to optimize their situation. Outputs reflect a sum of inputs, the system is closed, and if big change comes it comes as an external shock. The system’s default state is equilibrium."

DWAnderson says, "No one believes that markets are infallible, the equilibrium states really exist, etc. They are all simplifications designed to highlight certain aspects of behavior. Almost everyone understands that models are useful for understanding aspects of how the world works, not for predicting specific states of the world."

First, as davidcayjohnston points out, many, many people, including many who directly influence government policy, literally believe the things that DWAnderson says no one or almost no one believes. They certainly say as much on the campaign trail and on the floor of the House and Senate.

Second, how exactly do the above two quotes differ in their interpretation of neoclassical/neoliberal orthodoxy? Liu and Hanauer assert that a simple interpretation of traditional theory *assumes* linear systems, rational actors seeking optimization, etc. DWAnderson says that these ideas are simplifications for the purposes of modeling, which is what Milton Friedman asserted. Where's the disagreement? Where is the alleged straw man? Liu and Hanauer seem to have accurately described DWAnderson’s explicitly stated belief.

The fact is that neoclassical/neoliberal orthodoxy rests entirely on simplifying axioms that make the work of an orthodox economist easier, but have no foundation in empirical reality. Economic actors are rarely rational and almost never even try to optimize their decisions outside of very narrow data considerations and time constraints. I have yet to encounter a cost/benefit analysis in the business world that really attempts a comprehensive and quantitative approach to external or intangible costs and benefits, even though this is actually possible with Bayesian methods. I’m sure it’s done occasionally, but who has time for that? Besides, data unavailability, near-incomprehensible multivariate interactions, and near-infinite opportunity costs render rational maximization mathematically intractable. Good thing nobody ever really tries it.
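
(A quick feel for that intractability, with our own illustrative arithmetic: a decision with just 50 independent yes/no considerations already has more outcomes than anyone could enumerate, before a single intangible is weighed.)

    # 50 binary considerations -> 2**50 possible combinations to compare
    print(f"{2 ** 50:,}")   # 1,125,899,906,842,624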

And while DWAnderson seems to be of the opinion that government and democracy are just too damn risky to trust with anything important, it’s painfully clear that markets can’t be trusted to referee themselves. The government referees the game. The people referee the government. If that’s not working, fix it. Don’t throw up your hands and put your faith in markets. Markets don’t care what we want. Markets set prices; they don’t have values. Democratic government allows the people to set values-based boundaries around markets and to encourage markets in the most beneficial and least harmful directions. Markets are not capable of making these decisions. Markets determine who wins and loses the games. They don’t decide what kind of league to set up. Society is OK with baseball. We’re not OK with gladiators hacking each others’ limbs off. There’s probably a market for that, though.

Markets are natural selection at work. Natural selection is blind; it has no goals with which to guide evolution. Humans are increasingly capable of guiding their own evolution. We care about the future of our species in a way that our genes can’t. We are also capable of guiding markets. And we should, because we care about market outcomes in a way that markets can’t. It’s one of those erroneous simplifying assumptions to hope that all the individual market decisions will add up to the aggregate will of the people. We know that’s empirically false. Without government guidance, all the individual market decisions add up to the aggregate will of the oligarchy.
Jorge Icabalceta jayrayspicer • a year ago
Your statement, sir, is full of contradictions as big as the contradictions in the original article by Eric Liu and Nick Hanauer. You, like the authors, talk about how economists (all economists?) think and what they believe. That is a huge mistake. You, sir, do not know me and you do not know a bunch of economists. So, how can you speak on my behalf and on behalf of the many economists you have not taken the time to ask?

Second, you and the authors keep talking about “markets” as if markets are some sort of device. Well, let me enlighten you. Markets are people. I do not need to compare markets to machines or gardens. Why don’t we just state the fact that markets are people? It is just pure nonsense to use other terms when you have the right one in front of you.

Third, I am tired of making nitwits understand that economic theory is just that: theory. We use economic theory as a vehicle to explore issues and to try to get a sense of what the hell is going on. But in my 12 years of economic education, no instructor ever told me that theories are sacred. Besides, the very fact that economics is always evolving is because we question economic theory and develop it. Nobody ever told me all the bs you said about markets being perfect. I have never read that in any economics book. All this constant attack against a theory by you and people like you is a distraction from the real issues.

What are the real issues? Well, let me put them succinctly to you: the creation and distribution of wealth. The authors say economists act as if we do not know how wealth is created. That statement just shows how little the authors know economists. How about the distribution of wealth? You say that economists do not care about that. Nothing is farther from the truth. We do care, and we advise, we point out, we propose. But the people in power do not listen. I dare you to go to one-percenters and try to convince them of how unfair their behavior is. I dare anyone to do that. The answer you will get is that they deserve that wealth. It is not economists or economic theory; it is people with power.

So, while you keep blaming anything or anyone for whatever is happening without taking into consideration humans and their values, you are just wasting your time and other people’s time.
jayrayspicer Jorge Icabalceta • a year ago
Jorge Icabalceta, I’m not sure why you’re attacking me or Liu and Hanauer. You seem to be complaining about exactly what we are complaining about. I totally agree that economics that doesn't take into account humans and their values is useless or dangerous. Which is exactly the problem with neoclassical/neoliberal theory. The prevailing economic theory tells the one percenters that they’re not doing anything wrong when they so clearly are. And I never said anything about “all” economists. I singled out neoclassical/neoliberal economists. I don’t have to ask all economists in order to knock down the gibberish of the ones I happen to be talking about.

I also never said anything about markets being devices. Of course markets involve people making economic decisions. Liu and Hanauer certainly used a couple of analogies to make their points, but that’s fairly standard practice in debate, in order to illustrate a point. Don’t like the analogies? Point out their flaws, but you’re unlikely to get anybody to stop using them.

You should know that the word “theory” means "explanatory framework" in an academic setting. Dismissing something as "just a theory" doesn't really make a lot of sense, except in the vernacular. Unfortunately, the neoclassical/neoliberal theory is an explanatory framework founded on nonsense and useful to people who like oligarchy. Theories aren’t sacred, but they can be more or less useful in explaining the world and being productive of research and models that help people understand the world. Neoclassical/neoliberal theory utterly fails as a productive framework. It seems to explain human behavior, but when you look closely, the humans look more like robots. Consequently, the theory is incapable of making accurate predictions about the real world.

Neoclassical/neoliberal economics doesn’t say that markets are perfect or that people are rational maximizers (to take two examples). Instead, the theory states that in aggregate, for purposes of modeling, you can assume that they are. This bit of legerdemain is nonsense. It might make sense if they had any empirical research showing the shape of the distribution curves around “perfect” and “rational” and “maximization”, but they don’t, so the baseline of every model is built on a foundation of sand.

Until we have a proper theory, it's hard to even see the real issues. Economists may engage in impressive mathematics, but with bogus axioms, it’s no better than guesswork. Garbage in, garbage out. And since neoclassical/neoliberal gibberish is the prevailing economic theory, people in power actually do use it as their roadmap to policy. Countless influential people actually believe it and use it to justify destructive policies. They may not be listening to you, but they’re certainly listening to economists, very closely and very disastrously, because they’ve bet on the wrong theoretical horse. When the prevailing theory is doing this much damage, we have to keep hammering at its foundations to dismantle it.
ChrisTavareIsMyIdol jayrayspicer • 6 months ago
Then you haven't studied economics.
John M Legge Jorge Icabalceta • 9 months ago
Your descent into personal abuse reveals the shallowness of your understanding of both economics and reality.
Rjschundlr Jorge Icabalceta • 10 months ago
Economic theories are valid as long as you know what values are at play. The basic theory is that people value some things/time/recognition/etc. more than other things, as well as knowing that at a certain point, the more you have the less you value more of it .... Etc., etc. One just has to know (which is difficult at times) what values are being traded. As for the 1%, or for that matter the .001%-ers, you have no concept of their values and what they are dealing with ..... Part of the problem is envy! There is nothing unfair about trying to meet the needs of the people while making a profit. An actor gets paid for acting; some get paid more, some less; some are in the top 1%, some are in the bottom 1%, but there is nothing unfair about that. ..... If it was not for Steve Jobs, we would not have the Apple products; he had to learn how to be the CEO he became, and as a result he became very, very rich, but his wealth benefited many other people, and the products he sold benefited many other people .... It was a win/win all the way ..... What is unfair is not the conduct of most of the top 10 percent; it is the conduct of union leaders like the NEA that actively keep the POOR poor. Instead of letting children go to the best schools for the children, the NEA works to keep children in schools that do not educate ..... This is not economics at play, it is raw power, as some would say PEOPLE POWER, used against the POOR!
ChrisTavareIsMyIdol jayrayspicer • 6 months ago
Laughable response
Rjschundlr jayrayspicer • 10 months ago
"Economic actors are rarely rational and almost never even try to optimize their decisions outside of very narrow data considerations and time constraints" .... That does not mean that they are acting irrational, it terms of their own values, and the time constraints they have to make choice .... The fact that their choices may differ from yours seems to be part to the issue you have with Adam Smith. Government almost never look our for the common man, which is why our founders tried to limit Government. Most of the time government trys to guide the economy towards a oligarchy.... Not aways for it. All the while taking funds from the producers and transferring them to employees of the state.Hense today, government employees get better pay, better benifits, at less risk, than non-government employees.
davidcayjohnston jayrayspicer • a year ago
I think you meant to reply to DWAnderson, not to me...He made the straw man argument.
Henry Leveson-Gower DWAnderson • a year ago
I cannot see how models can be useful for anything if they fundamentally misrepresent the world they seek to describe. This defence of current economic models seems meaningless.
Garrett Watson Henry Leveson-Gower • a year ago
"All models are wrong, but some are useful." - G.P. Box
John M Legge Garrett Watson • 9 months ago
A model that is built on false assumptions (as distinct from simplifying from reality) or incorporates a contradiction can be used to "prove" anything.
Rjschundlr DWAnderson • 10 months ago
A good review of the situation; the government more often than not causes the problems that people blame the free market for. The Great Depression should have been a simple correction, but the government, both under Hoover and more so under FDR, caused the Great Depression, and our recovery was due to Truman, not WW2. The same is true for the Great Recession of 2007-2016, which was caused mostly by poor federal policies and then extended due to Obama's policies. We are in an international world of trade; our tax policies need to be reformed to adjust to new facts. The free market is best in the long run, which does not mean there is no government; it just means that we need better educated people in government. Employment-based taxes promote the conservation of labor (unemployment), which is foolish. There should be no taxes on employment until we have over-employment. To replace the lost revenue from employment-based taxes we should pass the FAIR TAX on goods, which would do much to improve employment. And as pointed out by Adam Smith in 1776, parents should pick the school that their children go to; the state should just allocate funds to the schools or to the parents based on where the child goes to school. This would also promote employment and reduce government spending on support programs.
John M Legge Rjschundlr • 9 months ago
What on earth are you smoking?
Swami DWAnderson • a year ago
They should let DWA write the opinion pieces. There is more wisdom and nuance in his brief response than in this entire rambling and in places dishonest original article.
Garrett Watson • a year ago
The irony here is that it is pro-market thinkers and economists who have been stalwart opponents of the idea that markets are machines. It's the interventionists and technocrats who tend to see societies and economies as mechanistic entities that can be improved through scientific intervention. New Institutional and Public Choice economists have been all about how important the "rules of the game" are to societal and economic well-being, for example.
Andrew Garrett Watson • 10 months ago
Thank you. This article - and Evonomics in general - seems to be deeply confused over the applicability of complexity theory and evolution to economics. Many of these articles start with the premise that evolutionary and complexity sciences are interesting and novel frameworks for reevaluating how we think about economies (which they are!), though typically fall far short of taking even a cursory look into what exactly these fields of study entail.

They then somehow end with the bizarre conclusion that the justification for economic intervention flows from complexity science... because... science? It's as though someone said "Hey, this complexity science stuff sounds cool! Lets glue it onto the same command-economy policy advocacy that is already the status quo, except pretend it's something new that's needed to combat a mythical 'free market fundamentalist' straw-man that doesn't even exist!"

The market is itself an evolving organism. To tamper with it through technocratic intervention is to artificially manipulate a natural evolutionary process. That's not to say that a reasonable level of state regulation shouldn't exist, but it's incredibly bizarre to cite the bottom-up, decentralized, self-organizing principles of evolutionary theory as a logical basis for top-down intervention. Economic intervention has more in common with GMOs than natural evolution.
efalken • a year ago
If you really want to change people's minds, don't use straw man arguments like "markets are efficient, thus sacrosanct." Otherwise, you are just preaching to the choir.
Pedro Romero • a year ago
This is ideological, too bad
KåBe • a year ago
The irony is that as we started to dominate nature and shape her according to our needs, we started treating the economy as a system that was too complicated for us to understand and should be left to govern itself.
Brian Gladish • a year ago
Markets are simply evolutionary processes. Are human beings a "perfect" outcome of biological evolution? The idea of perfection requires some normative point of view, which Evonomics displays in spades. Markets aren't "perfect," but just like biological evolution, they are a reality of life. If you don't believe that, just try suppressing them.
Πάνος Μάντζαρης • a year ago
More from writers who do not seem to know what it is they are talking about ... :-( (as far as I can tell)
philofra • a year ago
The title is interesting: "Complexity Economics Shows Us Why Laissez-Faire Economics Always Fails"

It sounds a bit too simple. For one, 'laissez faire' doesn't really or completely exist, just like 'free trade' doesn't really or completely exist. Things are always more complex than that, especially when humans are involved. If there had been total laissez faire in place, as the writer suggests, the economic crisis of 2008 would have ended everything, since there would have been no back-up systems in place to fix or temper things. The back-up systems that saved us from a total economic meltdown were constructed of complex mechanisms that had been developed from past knowledge and experience. Fortunately there were some complex thinkers in charge at the time.

Ironically, though, things were not as simple as this article makes out, because there was a 'complexity' involved that helped bring about the financial crisis of 2008. It was made of a complex web of derivatives that few economists understood. Those complex derivatives are what really lay behind the meltdown of 2008.
John M Legge • a year ago
Excellent contribution. Missing reference: Eric Beinhocker http://www.randomhouse.com....

An easier read if you haven't been cursed with an economics education: http://www.transactionpub.c...
David Burns • a year ago
So free market advocates caused all the woes of history, and now we should give government intervention a try at last! Ha! My impression is that the gardeners have been at it from the beginning and the best the machiners have been able to do is prevent them from chopping off all the tomato buds.

Nassim Taleb has given a much better critique of the efficient markets hypothesis. His has the advantage of actually addressing the hypothesis, rather than a parody. His critique focuses on the different behaviors implied by assumptions about the underlying statistical distribution of events. This one seems to mistake the efficient market hypothesis for the perfect market hypothesis.
Ted Howard • a year ago
So many great ideas in this post, and also so many mixed up ideas.

Laissez faire works in a sense: it produces the outcomes that it does; it's just that in terms of long-term survival probabilities, it delivers very low probabilities of long-term survival for anyone in the system.
When things are genuinely scarce it works remarkably well (though with the noted problems), but as exponential advances in information processing bring about exponential advances in the domains where universal abundance is possible, the whole notion of markets (being a scarcity-based value measure) introduces major instabilities that have a high probability of collapsing the system as a whole at some point (not just recession, but oscillation into a totally competitive modality, taking all human life with it).

The article is really good, insofar as it explores complexity and compares classical equilibrium economics with complexity economics. And that is only taking a single step into the realm of complexity. It is infinite; there are lots more steps.

Wolfram, with NKS, takes a step into the abstract and generalised realms of computation and relationship, but he holds on tightly to the notion of causality. He does go so far as to firmly establish the idea of maximal computational complexity, and thus demonstrate that there are entire classes of systems, even among the simplest computational systems, that are not reducible to any sort of predictive formula. The only way to see what will happen in such systems is to run them and see. There are many levels of such systems in living systems, and as economics is the study of an aspect of human beings (an instance of a living system), there are many aspects of us that are not predictable in any sense (even if one assumes, as Wolfram does, that they are fully causal).
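
(For readers who haven't met Wolfram's examples, here is a minimal sketch, using our own encoding of the published Rule 30 cellular automaton, of how a trivially simple rule produces structure you can only discover by running it.)

    RULE = 30  # Rule 30: an 8-bit lookup table, one output bit per neighborhood

    def step(cells):
        """Map each cell's (left, center, right) neighborhood through RULE."""
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                    # a single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)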

If one follows Bayes, and the experimental evidence of QM, where they actually seem to lead, then (in the twin contexts of complexity theory and probability theory) Ockham seems to take us to a place where any stochastic system which is constrained in some way to some set of probability distributions will, in a sufficiently large collection, display sets of properties that deliver outcomes that very closely approximate hard causality.

So Wolfram, Dennett and many others hold firmly to assumptions of hard causality, which have the ultimate outcome of making us automata and invalidating any non-trivial level of morality. Much as I admire both Dennett and Wolfram and many others, the foundational assumptions of causality they cling to do not seem to follow Ockham's dictates.

It seems clear that in a world that is fundamentally stochastic, fundamentally random to some degree, properties will develop that both deliver a close approximation to causality in large collections and allow genuine (non-deterministic) freedom to coexist. Thus we can get what we seem to have, both causality and morality (neither being absolute, though both workable in practice).

So what does this have to do with economics?

Economics is in a sense a study of human behaviour - what do we do and why?

Behaviour is about goals and goals can be thought of as deriving from values - but that seems to not actually be how it is.

Actually what seems to be reality, is that at every level, evolution (natural selection, selective survival of variants) seems to select what works (at the genetic level, at the cultural level, and at any other sets of levels that might emerge). So it seems that ultimately, all of our likes and dislikes, all of our morality, our deepest or highest desires, derive from survival at some level of systems.

So when Beinhocker talks of "fitness landscapes" and "individuals and groups cooperate to compete" that is true in a sense, and if taken at face value it leads to a not very useful understanding of evolution.

In order to be useful, the understanding must look at the nature of the strategic environment, the nested levels of context, as well as the nested levels of associated sets of strategies. This applies from the subcellular groupings of RNAs, proteins, DNAs, into cells, organelles, and on up the genetic tree of diversity of life forms. It applies equally to the mimetic and cultural environments that have evolved on top of (and in the context of) those genetic systems. And it applies to those entities that have emerged from that nested set of cosmological, chemical and genetic contexts and the emergent mimetic and cultural contexts.

So in this context, there is some real power in the statement "What I think you think about what I want creates storms of behavior that change what is", and the levels of replicators possible in the system seem to be infinitely extensible, not at all confined to the two reasonably well described ones of genetic and mimetic.

So rather than being like a garden, it is much more like wandering through a TARDIS where, every time you open a door to a room, it grows three new rooms with three new doors, not just in the room behind the door you just opened, but in every room thus far in existence. This seems to be the nature of the reality we find ourselves in. The Zen Buddhists seem to have captured a flavour of it in their saying "that for the master, on a path worth walking, for every step on the path, the destination grows two steps further away".

In such a reality, it is the values we as free agents choose (in as much as we do choose, and are not simply tools of our unexamined genetic and cultural history) that are of prime importance.

It seems clear to me that individual life must take pride of place, followed closely by individual liberty (our own and everyone else's, in near equal measure) if we are to have any reasonable chance of living long enough to have a reasonable exploration of the infinities available to the enquiring mind. And in that context, Wolfram has clearly shown, that whatever basis one assumes, strictly causal, or more loosely constrained stochastic causal, the principle of maximal computational complexity will be our companion - which guarantees uncertainty and unknowability in practice of many classes of aspects of reality.

So it becomes very clear in logic and reason that it is time for humanity to acknowledge the real historical utility of markets, the way in which they supported freedom in a context of genuine scarcity, and to move past such scarcity-based paradigms into an age of freedom in abundance, where security (in as much as it can exist) is delivered by distributed trust networks, distributed information networks, distributed production networks, and massive redundancy at all levels (to give as much flexibility as possible to respond to the unknowns and unknowables that must logically reside in all complex systems).

The statement "If well-tended, markets produce great results but if untended, they destroy themselves" could arguably be said to be true for most of history up until very recently, when scarcity did genuinely dominate the world of goods and services.

Now that we have the computational ability to access universal abundance, either we go beyond economics, or economics will most probably destroy us - there really isn't any stable middle ground.
Aodhain o falluin Ted Howard • a year ago
This was a fantastic reply. Thank you for sharing.
Robert Lapsley Ted Howard • a year ago
I appreciate your post. It shows a well-read understanding of complexity. I don't quite understand your idea of universal abundance. I have always felt it is obvious that as we expand our numbers, we will be incrementally and increasingly constrained by natural resources. In addition, considering the importance of the structure of the nodes and connections in networks, with some ties being more or less critical to the emergent properties of higher levels, I worry that as we continue to expand our economies we also risk cutting the ties that allow for the continued natural services we depend on.

I am really happy to hear mention of "the values we as free agents choose"! This, before all else, I think is the nut to crack. So... if our ethics and morality are human attributions of value, fundamentally describing our relationships with the world, then these relations inform our reason and are responsible for the creative complexities of our modern lives, of all our history. If I read you correctly, we could agree that “We” decide what relations are of value to us.
More, we decide the existential first and principal concern, the foundation, the structural underpinning, the framework supporting all of our patchwork morals... we humans decide what this should be. And I hope we could agree that our continued existence on the planet Earth should be our first and principal concern. If so, then all other principles of moral conduct should be rooted here; from here stems our ethics.
I hope you would consider the ethics this entails. If you agree with this first principle, it seems intuitive that there are objective truths that we can reasonably conclude. Like the obvious: we need what the Earth provides. We need water, and food, and air to breathe. This is an obvious fact of biology, yet few ever pause to consider that Earth’s ecology is providing us these “natural services”, critical services that allow for our continued existence. Without any one of these, our future is improbable at best. So wouldn't the first principle of our continued existence entail a second principle of “Earth first”? Protecting the Earth’s ecology reduces the existential risk of losing the natural services provided; we inflate that risk with our continued interventions in the processes providing such services.
It follows that economic agency which serves to benefit the ecosphere serves the whole of humanity’s future, and the individual pursuing any interest at the expense of natural systems' health commits a crime against humanity’s future generations. Unfortunately, from my lonely standpoint, the hallowed and certain unalienable rights to “Life, Liberty, and the pursuit of happiness” are now viewed by me as a dependent and lower-order concern, constrained by the first and second. If man governs his fate best by serving something greater than himself, that something should before all else do no harm to the ecosphere.
David Bolinsky Robert Lapsley • a year ago
Robert Lapsley, if you have not run into the folks who study exponential growth, it would merit a look. There is evidence, with exponential growth in a broad range of important fields (computer science, algorithm design, genomics, proteomics, nanotechnology, solar power, etc.), that the scarcity you describe (and which fits perfectly in a linear-growth physical world with an exponentially growing population) will disappear, to be replaced by a universal abundance of food, power, clean water, and universal education (to name but a few), all in less time than linear experience would predict. Peter Diamandis' book 'Abundance' is a cool place to start, and you can find great ideas keenly detailed in Ramez Naam's 'The Infinite Resource: The Power of Ideas on a Finite Planet.'
Ted Howard Robert Lapsley • a year ago
Hi Robert,
The assumption that expanding numbers will create resource constraints is based upon the assumption of static technology. Our technical capacity is expanding at a far greater exponential rate than our population (population growth is currently about 3% per annum and dropping, while computation is growing at 120% per annum and increasing). Thus we are finding ways to do more with less, faster than our need for more is increasing.
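
(A quick check of those rates, taking the comment's figures at face value; they are assumptions, not official statistics. The gap in doubling times is easy to compute:)

    import math

    def doubling_time(annual_rate):
        """Years to double at a compound annual growth rate."""
        return math.log(2) / math.log(1 + annual_rate)

    print(f"population at 3% p.a.: doubles in ~{doubling_time(0.03):.0f} years")     # ~23
    print(f"computation at 120% p.a.: doubles in ~{doubling_time(1.20):.2f} years")  # ~0.88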

So yes, there are physical limits, and right now, we still have quite a bit of room inside the technology curve before we reach those limits.

When you look back into history, the hunter-gatherer technology of sustaining humans required over a million square meters per person. As we developed agriculture that number dropped, and as we refine agriculture and molecular-level manufacturing even further, those numbers will drop further still. As we grow fruits and vegetables under cover, in small isolated units, we can isolate them from insects and birds and mammals, and as filter technologies develop we will shortly even be able to remove airborne spores. Those two abilities remove the need for chemical sprays (which are low-level toxins for us, but much better than starvation).

The actual limit for humans, living under our rooftop gardens and solar panels, on top of our water storage and deep underground recycling and high-speed transport networks, seems to be around 500 square meters per person (for food and energy enough to live what most of us would consider a high standard of living, with energy-efficient high tech giving us serving robots to do all the maintenance work on house and garden, and all the cooking, cleaning, etc. required to maintain everything, including themselves). We're not quite at the level of being able to produce all that technology right now, but it will certainly be available within 20 years, so we had better be prepared for it.
At that limit of technology, and leaving half the land surface for natural ecosystems while modifying the rest for human use, we could sustain 10 billion people using half the land surface, none of the water, and having sufficient reserves to survive a loss of half the capacity to natural disaster.

If we were to use 20% of the ocean for food and energy farms for coastal megacities we could easily double that population. So about 20 billion is a practical limit if we are ensuring that every individual has the sorts of freedom of travel, communication and manufacture that I as the CEO of a software company consider reasonable.
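As a sanity check of those figures, here is a minimal sketch in Python (the 500 m² footprint and the populations are the ones stated above; the land area is the standard approximate value):

LAND_M2 = 1.49e14        # Earth's land surface, approx.
FOOTPRINT_M2 = 500       # claimed per-person food + energy footprint

for people in (10e9, 20e9):
    used = people * FOOTPRINT_M2
    print(f"{people / 1e9:.0f} billion people: {used:.1e} m^2, "
          f"{100 * used / LAND_M2:.1f}% of all land")

On these assumptions the quoted populations occupy only a few percent of the land, so the stated limits are evidently set by the reserve margins, water and ocean assumptions rather than by raw area.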

Everything has to do with the technologies we employ. Currently most of our manufacturing technology is based on scales that suit human supervised tools. When we have molecular level nano-scale manufacturing and resource recycling, then everything changes.

So yes - we need to consider the ecosystems we live with, but we don't need to be reliant upon them; we can isolate and optimise those aspects we need for survival purposes, while retaining the rest for enjoyment purposes. And I have been a lifelong conservationist, have studied ecology and biochemistry at university, currently chair our district zone water management committee, am a member of our regional biodiversity committee, chair the Huttons Shearwater Charitable Trust, and have over 40 years' involvement in fisheries management - so I don't just write about this stuff, I do it in practice, and am all too conscious of the practical issues we face, the technological advances needed to address them, and the inadequacy of free-market incentives to do that job.

And yes - it seems clear to me that ultimately all of our values resolve back to survival at some level, through some chain of genetic or memetic evolutionary linkages.
And yes, I am certainly all about systems that enhance survival probabilities for humans first and foremost, and for most other life forms also (not for those that pose significant direct threats to human survival).

And also yes - clean air, with about 20% oxygen, clean fresh water, adequate nutritious, tasty, safe food are all prerequisites for a full life, as are health care, education, transportation, tools, shelter, information, and general security (freedom from threats).

I have some issues with the "pursuit of happiness", inasmuch as when you look deeply into the origins of happiness, at the genetic and cultural levels of evolution, through the many biochemical pathways, it seems to be various sets of survival directives averaged over the conditions present in the deep time of our genetic and cultural evolution. Little in that deep past is compatible with our exponentially changing technological present, so many of the things that worked for our deep ancestors no longer work for us. The default settings of "happiness" can therefore lead us seriously astray (like drinking sugary drinks). With knowledge we can manage such things, and with that proviso, certainly, pursue whatever we reasonably choose, within the context of the survival of ourselves and all others, and within the context of the greatest level of reasonable freedom we can supply to all.

I am not about serving anything.

I am about individual survival and individual liberty, and that comes in a context of cosmology, chemistry, evolution, and the existence of a large set of other sapient entities (human and non-human, biological and non-biological) with the same rights to life and liberty as myself. The complex system that results necessarily has flexible context sensitive boundaries at every dimension of interaction.
Andy White • a month ago
A MIXED FARM is a far better comparison than a Garden (a farm is a working garden on a far larger scale, with a specific function and purpose.... Mixed farming is about combining animal husbandry with agriculture and horticulture.)

"Markets are a type of ecosystem that is complex, adaptive, and subject to the same evolutionary forces as nature."

This quote is what Daniel Dennett would call a Deepity: something that is superficially correct, but that, if it were actually true in the deeper sense, would make the world vastly different from what it is.

"The markets" and "the Garden metaphor" both make out that economics is some "naturalistic reality" and can be considered using evolutionary thinking/theory. When the stark reality is that markets are entirely artificial creations (such as the farm to mark it as distinct from a garden - the working realities if farming and desired outcome make them distinctly "anti-natural").

Understanding the failure of laissez-faire economics requires grasping, at a deeper level than this article outlines, the fundamental/ideological - PHILOSOPHICAL - failure to understand human nature, what society is, and how civilisation actually functions.

To give a "brief" - money is a fantastic TOOL in that it allows for the organisation of human interaction that allows for the specialisation and interaction of different elements of society. In order for this to function "the system" needs to be controlled, regulated and policed to "stay on task" in providing both "a level playing field, and coherent/fair set of rules as well as a divisioning of teams into different leagues to protect community /local /regional/ national/ international and global interests".

Please note my switching from Farming to Sporting is intentional as it relates to the difference between "main st" and "wall st" economics - between economic and financial interests.
ChrisTavareIsMyIdol • 6 months ago
Utter nonsense. Traditional economics explains what has happened in the USA: when jobs are exported and you have massive levels of immigration, wages for the poor fall.
Kai • 8 months ago
I like the article - thank you very much for that! The metaphor of the garden could possibly be exchanged for another metaphor. The major argument behind it is that we should steer the economy, because otherwise the maximization of short-term gains leads to catastrophes - call them automatic corrections - that certainly no one wants. Here is a cause-and-effect model that shows the logical need for market intervention: https://www.know-why.net/mo...
Jordan • a year ago
Generally, love the ideas presented here. Thank you.

However, in closing, the article all but equates wealth with innovation. This ignores the extreme problem which currently exists with hoarded, unshared, accumulated wealth: i.e., assets which are protected by the exclusionary nature of ownership claims and held out of social circulation for the sole benefit of the owners.

When so many trillions are off-shored, and so much accumulated wealth is claimed by so few individuals, it is not sufficient to merely address the fairness of "new" wealth creation. The immorality and theft committed by prior generations is the most obstructive and damaging weed in the garden and it needs to be plucked out.
Robert Lapsley • a year ago
The garden metaphor might be a rhetorical fallacy. Biased adjectives like "unhealthy" ("tends toward unhealthy imbalances") imply intention. We resist entertaining the amoral nature of the dynamics of complex systems. I don't insist human agency has no effect on evolution, but our ideologies are narrow, soft and imperfect. Human attributions of value need not align with the natural world's evolution; we are surprised when they differ - read "unintended consequences". If no intentions exist, then any winding way is as good or bad as the next. We have been describing the nature of systems dynamics for some time now, and if I am not off the rails, I would suggest that every possible pathway forward into the future is pressed. Every niche where potential can unwind will be filled. Those extant manifestations that "survive", those moves allowed for, encounter new constraints which open and close possible future moves.
My point is that we should take care when characterizing the dynamics of our economies as value laden; our mores, ethics and morality ride on top of natural systems.



Topic: Re: Here's why women in China don't get breast cancer (new) [re: Mod vege]
Author: | (>[2] /dev/null)
Posted: 11.04.17 15:30



You keep trying to muddy the topic, solely so you don't have to admit that you posted an idiocy that has nothing to do with reality. That you're no scientist is clear, but it's not only scientists who have the decency to admit their mistakes.

The last good thing written in C was Franz Schubert's Symphony No. 9.


Topic: The Japanese practice of 'forest bathing' - science (new) [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 12.04.17 02:05






Research has shown the health benefits of 'forest bathing', the act of being among the trees.

The tonic of the wilderness was Henry David Thoreau’s classic prescription for civilization and its discontents, offered in the 1854 essay Walden: Or, Life in the Woods. Now there’s scientific evidence to back it up. The Japanese practice of shinrin-yoku, or forest bathing, is proven to lower heart rate and blood pressure, reduce stress hormone production, boost the immune system, and improve overall feelings of wellbeing.

Forest bathing—basically just being in the presence of trees—became part of a national public health program in Japan when the forestry ministry coined the phrase shinrin-yoku and promoted topiary as therapy. Nature appreciation—picnicking en masse under the cherry blossoms, for example—is a national pastime in Japan, so forest bathing quickly took hold. The environment’s wisdom has long been evident to the culture: Japan’s Zen masters asked: If a tree falls in the forest and no one hears, does it make a sound?

To discover the answer, masters do nothing, and gain illumination. Forest bathing works similarly: Just be with trees. No hiking, no counting steps on a Fitbit. You can sit or meander, but the point is to relax rather than accomplish anything.


“Don’t effort,” says Gregg Berman, a registered nurse, wilderness expert, and forest-bathing guide in California. He’s leading a small group on the Big Trees Trail in Oakland one cool October afternoon, barefoot among the redwoods. Berman tells the group—wearing shoes—that the human nervous system is both of nature and attuned to it. Planes roar overhead as the forest bathers wander slowly, quietly, under the green cathedral of trees.

From 2004 to 2012, Japanese officials funded research on the physiological and psychological effects of forest bathing, designating 48 therapy trails based on the results. Qing Li, a professor at Nippon Medical School in Tokyo, measured the activity of human natural killer (NK) cells in the immune system before and after exposure to the woods. These cells provide rapid responses to viral-infected cells and respond to tumor formation, and are associated with immune system health and cancer prevention. In one study, Li’s subjects showed significant increases in NK cell activity in the week after a forest visit, and positive effects lasted a month following each weekend in the woods.

This is due to various essential oils, generally called phytoncide, found in wood, plants, and some fruit and vegetables, which trees emit to protect themselves from germs and insects. Forest air doesn’t just feel fresher and better—inhaling phytoncide seems to actually improve immune system function.

Experiments on forest bathing conducted by the Center for Environment, Health and Field Sciences at Japan’s Chiba University measured its physiological effects on 280 subjects in their early 20s. The team measured the subjects’ salivary cortisol (which increases with stress), blood pressure, pulse rate, and heart rate variability during a day in the city and compared those to the same biometrics taken during a day with a 30-minute forest visit. “Forest environments promote lower concentrations of cortisol, lower pulse rate, lower blood pressure, greater parasympathetic nerve activity, and lower sympathetic nerve activity than do city environments,” the study concluded.

In other words, being in nature made subjects, physiologically, less amped. The parasympathetic nerve system controls the body’s rest-and-digest system while the sympathetic nerve system governs fight-or-flight responses. Subjects were more rested and less inclined to stress after a forest bath.

Trees soothe the spirit too. A study on forest bathing’s psychological effects surveyed 498 healthy volunteers, twice in a forest and twice in control environments. The subjects showed significantly reduced hostility and depression scores, coupled with increased liveliness, after exposure to trees. “Accordingly,” the researchers wrote, “forest environments can be viewed as therapeutic landscapes.”


City dwellers can benefit from the effects of trees with just a visit to the park. Brief exposure to greenery in urban settings can relieve stress levels, and experts have recommended “doses of nature” as part of treatment of attention disorders in children. What all of this evidence suggests is that we don’t seem to need a lot of exposure to gain from nature—but regular contact appears to improve our immune system function and our wellbeing.

Julia Plevin, a product designer and urban forest bather, founded San Francisco’s 200-member Forest Bathing Club Meetup in 2014. They gather monthly to escape technology. “It’s an immersive experience,” Plevin explained to Quartz. “So much of our lives are spent interacting with 2D screens. This is such a bummer because there’s a whole 3D world out there! Forest bathing is a break from the phone and computer…from all that noise of social media and email.”

Before we crossed the threshold into the woods in Oakland, Berman advised the forest bathers to pick up a rock, put a problem into it, and drop it. “You can pick up your troubles again when you leave,” he said with a straight face. But after two hours of forest bathing, no one does.

Joy Chiu, a leadership and life coach on the forest bath led by Berman, explained that this perspective on problems lasts long after a bath, and that she returns to the peace of the forest when she’s far from here, feeling harried. “It’s grounding and I go back to the calm feeling of being here. It’s not like a time capsule, but something I can continually return to.”




Topic: Re: Here's why women in China don't get breast cancer (new) [re: |]
Author: Mod vege, Moderator (old dog)
Posted: 13.04.17 00:38




We've all seen that argumentum ad hominem is your favorite. That's no way to conduct a serious discussion, but apparently Dir can't manage any better these days.

Edited by Mod vege on 13.04.17 02:20.



Topic: Berlusconi saves lambs and angers meat producers (new) [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 13.04.17 00:40





Businessman and former Italian prime minister Silvio Berlusconi has angered representatives of the Apennine meat industry after appearing in a video in which he "rescues" lambs and calls for a vegetarian table this Easter, RIA Novosti reports.

The video by the Italian environmental protection league, in which Berlusconi hugs lambs and bottle-feeds them, became very popular online. The politician is filmed against the backdrop of a sign reading "Defend life, choose a vegetarian Easter."

Lamb at Easter is part of the tradition in Italy, but in recent years fewer and fewer people observe it, owing to the economic crisis and the campaigning of animal-rights activists.

"It is incredible, but with his business skills he is damaging the meat industry and trying to win the support of animal lovers," the Italian meat producers' organization Assocarni noted in a statement.
In 2016 a number of Italian newspapers reported that Berlusconi had become a vegetarian. The politician himself said he had never declared any such thing, but he avoided a definitive answer to the question of whether he still eats meat.



Topic: Re: Here's why women in China don't get breast cancer (new) [re: Mod vege]
Author: | (>[2] /dev/null)
Posted: 13.04.17 00:44



Why do I get the feeling that ad hominem is the only fallacy you've heard of... :) Schopenhauer wrote a little book; in English it's called "The Art of Being Right: 38 Ways to Win an Argument". You might look through it and educate yourself a bit.

If you want a serious discussion, you first have to learn to admit it when you're wrong, and to stop posting idiocies from little propaganda sites.

So, tell us now: what does the scientific literature say? Is there proven harm from dairy products with respect to breast cancer, or not? Seems not, eh? :)

The last good thing written in C was Franz Schubert's Symphony No. 9.

Edited by | on 13.04.17 00:45.



Topic: Matter Conscious? Neuroscience mirrored in physics (new) [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 13.04.17 02:12




Why the central problem in neuroscience is mirrored in physics.
BY HEDDA HASSEL MØRCH


The nature of consciousness seems to be unique among scientific puzzles. Not only do neuroscientists have no fundamental explanation for how it arises from physical states of the brain, we are not even sure whether we ever will. Astronomers wonder what dark matter is, geologists seek the origins of life, and biologists try to understand cancer—all difficult problems, of course, yet at least we have some idea of how to go about investigating them and rough conceptions of what their solutions could look like. Our first-person experience, on the other hand, lies beyond the traditional methods of science. Following the philosopher David Chalmers, we call it the hard problem of consciousness.

But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser known, but equally hard, problem of matter. What is physical matter in and of itself, behind the mathematical structure described by physics? This problem, too, seems to lie beyond the traditional methods of science, because all we can observe is what matter does, not what it is in itself—the “software” of the universe but not its ultimate “hardware.” On the surface, these problems seem entirely separate. But a closer look reveals that they might be deeply connected.



Consciousness is a multifaceted phenomenon, but subjective experience is its most puzzling aspect. Our brains do not merely seem to gather and process information. They do not merely undergo biochemical processes. Rather, they create a vivid series of feelings and experiences, such as seeing red, feeling hungry, or being baffled about philosophy. There is something that it’s like to be you, and no one else can ever know that as directly as you do.

Our own consciousness involves a complex array of sensations, emotions, desires, and thoughts. But, in principle, conscious experiences may be very simple. An animal that feels an immediate pain or an instinctive urge or desire, even without reflecting on it, would also be conscious. Our own consciousness is also usually consciousness of something—it involves awareness or contemplation of things in the world, abstract ideas, or the self. But someone who is dreaming an incoherent dream or hallucinating wildly would still be conscious in the sense of having some kind of subjective experience, even though they are not conscious of anything in particular.


Where does consciousness—in this most general sense—come from? Modern science has given us good reason to believe that our consciousness is rooted in the physics and chemistry of the brain, as opposed to anything immaterial or transcendental. In order to get a conscious system, all we need is physical matter. Put it together in the right way, as in the brain, and consciousness will appear. But how and why can consciousness result merely from putting together non-conscious matter in certain complex ways?

This problem is distinctively hard because its solution cannot be determined by means of experiment and observation alone. Through increasingly sophisticated experiments and advanced neuroimaging technology, neuroscience is giving us better and better maps of what kinds of conscious experiences depend on what kinds of physical brain states. Neuroscience might also eventually be able to tell us what all of our conscious brain states have in common: for example, that they have high levels of integrated information (per Giulio Tononi’s Integrated Information Theory), that they broadcast a message in the brain (per Bernard Baars’ Global Workspace Theory), or that they generate 40-hertz oscillations (per an early proposal by Francis Crick and Christof Koch). But in all these theories, the hard problem remains. How and why does a system that integrates information, broadcasts a message, or oscillates at 40 hertz feel pain or delight? The appearance of consciousness from mere physical complexity seems equally mysterious no matter what precise form the complexity takes.

Nor would it seem to help to discover the concrete biochemical, and ultimately physical, details that underlie this complexity. No matter how precisely we could specify the mechanisms underlying, for example, the perception and recognition of tomatoes, we could still ask: Why is this process accompanied by the subjective experience of red, or any experience at all? Why couldn’t we have just the physical process, but no consciousness?

Other natural phenomena, from dark matter to life, as puzzling as they may be, don’t seem nearly as intractable. In principle, we can see that understanding them is fundamentally a matter of gathering more physical detail: building better telescopes and other instruments, designing better experiments, or noticing new laws and patterns in the data we already have. If we were somehow granted knowledge of every physical detail and pattern in the universe, we would not expect these problems to persist. They would dissolve in the same way the problem of heritability dissolved upon the discovery of the physical details of DNA. But the hard problem of consciousness would seem to persist even given knowledge of every imaginable kind of physical detail.

In this way, the deep nature of consciousness appears to lie beyond scientific reach. We take it for granted, however, that physics can in principle tell us everything there is to know about the nature of physical matter. Physics tells us that matter is made of particles and fields, which have properties such as mass, charge, and spin. Physics may not yet have discovered all the fundamental properties of matter, but it is getting closer.

Yet there is reason to believe that there must be more to matter than what physics tells us. Broadly speaking, physics tells us what fundamental particles do or how they relate to other things, but nothing about how they are in themselves, independently of other things.

Charge, for example, is the property of repelling other particles with the same charge and attracting particles with the opposite charge. In other words, charge is a way of relating to other particles. Similarly, mass is the property of responding to applied forces and of gravitationally attracting other particles with mass, which might in turn be described as curving spacetime or interacting with the Higgs field. These are also things that particles do or ways of relating to other particles and to spacetime.


In general, it seems all fundamental physical properties can be described mathematically. Galileo, the father of modern science, famously professed that the great book of nature is written in the language of mathematics. Yet mathematics is a language with distinct limitations. It can only describe abstract structures and relations. For example, all we know about numbers is how they relate to the other numbers and other mathematical objects—that is, what they “do,” the rules they follow when added, multiplied, and so on. Similarly, all we know about a geometrical object such as a node in a graph is its relations to other nodes. In the same way, a purely mathematical physics can tell us only about the relations between physical entities or the rules that govern their behavior.

One might wonder how physical particles are, independently of what they do or how they relate to other things. What are physical things like in themselves, or intrinsically? Some have argued that there is nothing more to particles than their relations, but intuition rebels at this claim. For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air. In other words, physical structure must be realized or implemented by some stuff or substance that is itself not purely structural. Otherwise, there would be no clear difference between physical and mere mathematical structure, or between the concrete universe and a mere abstraction. But what could this stuff that realizes or implements physical structure be, and what are the intrinsic, non-structural properties that characterize it? This problem is a close descendant of Kant’s classic problem of knowledge of things-in-themselves. The philosopher Galen Strawson has called it the hard problem of matter.

It is ironic, because we usually think of physics as describing the hardware of the universe—the real, concrete stuff. But in fact physical matter (at least the aspect that physics tells us about) is more like software: a logical and mathematical structure. According to the hard problem of matter, this software needs some hardware to implement it. Physicists have brilliantly reverse-engineered the algorithms—or the source code—of the universe, but left out their concrete implementation.

The hard problem of matter is distinct from other problems of interpretation in physics. Current physics presents puzzles, such as: How can matter be both particle-like and wave-like? What is quantum wavefunction collapse? Are continuous fields or discrete individuals more fundamental? But these are all questions of how to properly conceive of the structure of reality. The hard problem of matter would arise even if we had answers to all such questions about structure. No matter what structure we are talking about, from the most bizarre and unusual to the perfectly intuitive, there will be a question of how it is non-structurally implemented.

Indeed, the problem arises even for Newtonian physics, which describes the structure of reality in a way that makes perfect intuitive sense. Roughly speaking, Newtonian physics says that matter consists of solid particles that interact either by bumping into each other or by gravitationally attracting each other. But what is the intrinsic nature of the stuff that behaves in this simple and intuitive way? What is the hardware that implements the software of Newton’s equations? One might think the answer is simple: It is implemented by solid particles. But solidity is just the behavior of resisting intrusion and spatial overlap by other particles—that is, another mere relation to other particles and space. The hard problem of matter arises for any structural description of reality no matter how clear and intuitive at the structural level.

Like the hard problem of consciousness, the hard problem of matter cannot be solved by experiment and observation or by gathering more physical detail. This will only reveal more structure, at least as long as physics remains a discipline dedicated to capturing reality in mathematical terms.



Might the hard problem of consciousness and the hard problem of matter be connected? There is already a tradition for connecting problems in physics with the problem of consciousness, namely in the area of quantum theories of consciousness. Such theories are sometimes disparaged as fallaciously inferring that because quantum physics and consciousness are both mysterious, together they will somehow be less so. The idea of a connection between the hard problem of consciousness and the hard problem of matter could be criticized on the same grounds. Yet a closer look reveals that these two problems are complementary in a much deeper and more determinate way. One of the first philosophers to notice the connection was Leibniz all the way back in the late 17th century, but the precise modern version of the idea is due to Bertrand Russell. Recently, contemporary philosophers including Chalmers and Strawson have rediscovered it. It goes like this.

The hard problem of matter calls for non-structural properties, and consciousness is the one phenomenon we know that might meet this need. Consciousness is full of qualitative properties, from the redness of red and the discomfort of hunger to the phenomenology of thought. Such experiences, or “qualia,” may have internal structure, but there is more to them than structure. We know something about what conscious experiences are like in and of themselves, not just how they function and relate to other properties.

For example, think of someone who has never seen any red objects and has never been told that the color red exists. That person knows nothing about how redness relates to brain states, to physical objects such as tomatoes, or to wavelengths of light, nor how it relates to other colors (for example, that it’s similar to orange but very different from green). One day, the person spontaneously hallucinates a big red patch. It seems this person will thereby learn what redness is like, even though he or she doesn’t know any of its relations to other things. The knowledge he or she acquires will be non-relational knowledge of what redness is like in and of itself.

This suggests that consciousness—of some primitive and rudimentary form—is the hardware that the software described by physics runs on. The physical world can be conceived of as a structure of conscious experiences. Our own richly textured experiences implement the physical relations that make up our brains. Some simple, elementary forms of experiences implement the relations that make up fundamental particles. Take an electron, for example. What an electron does is to attract, repel, and otherwise relate to other entities in accordance with fundamental physical equations. What performs this behavior, we might think, is simply a stream of tiny electron experiences. Electrons and other particles can be thought of as mental beings with physical powers; as streams of experience in physical relations to other streams of experience.



This idea sounds strange, even mystical, but it comes out of a careful line of thought about the limitations of science. Leibniz and Russell were determined scientific rationalists—as evidenced by their own immortal contributions to physics, logic, and mathematics—but equally deeply committed to the reality and uniqueness of consciousness. They concluded that in order to give both phenomena their proper due, a radical change of thinking is required.

And a radical change it truly is. Philosophers and neuroscientists often assume that consciousness is like software, whereas the brain is like hardware. This suggestion turns this completely around. When we look at what physics tells us about the brain, we actually just find software—purely a set of relations—all the way down. And consciousness is in fact more like hardware, because of its distinctly qualitative, non-structural properties. For this reason, conscious experiences are just the kind of things that physical structure could be the structure of.

Given this solution to the hard problem of matter, the hard problem of consciousness all but dissolves. There is no longer any question of how consciousness arises from non-conscious matter, because all matter is intrinsically conscious. There is no longer a question of how consciousness depends on matter, because it is matter that depends on consciousness—as relations depend on relata, structure depends on realizer, or software on hardware.

One might object that this is plain anthropomorphism, an illegitimate projection of human qualities on nature. After all, why do we think that physical structure needs some intrinsic realizer? Is it not because our own brains have intrinsic, conscious properties, and we like to think of nature in familiar terms? But this objection does not hold. The idea that intrinsic properties are needed to distinguish real and concrete from mere abstract structure is entirely independent of consciousness. Moreover, the charge of anthropomorphism can be met by a countercharge of human exceptionalism. If the brain is indeed entirely material, why should it be so different from the rest of matter when it comes to intrinsic properties?



This view, that consciousness constitutes the intrinsic aspect of physical reality, goes by many different names, but one of the most descriptive is “dual-aspect monism.” Monism contrasts with dualism, the view that consciousness and matter are fundamentally different substances or kinds of stuff. Dualism is widely regarded as scientifically implausible, because science shows no evidence of any non-physical forces that influence the brain.

Monism holds that all of reality is made of the same kind of stuff. It comes in several varieties. The most common monistic view is physicalism (also known as materialism), the view that everything is made of physical stuff, which only has one aspect, the one revealed by physics. This is the predominant view among philosophers and scientists today. According to physicalism, a complete, purely physical description of reality leaves nothing out. But according to the hard problem of consciousness, any purely physical description of a conscious system such as the brain at least appears to leave something out: It could never fully capture what it is like to be that system. That is to say, it captures the objective but not the subjective aspects of consciousness: the brain function, but not our inner mental life.


Russell’s dual-aspect monism tries to fill in this deficiency. It accepts that the brain is a material system that behaves in accordance with the laws of physics. But it adds another, intrinsic aspect to matter which is hidden from the extrinsic, third-person perspective of physics and which therefore cannot be captured by any purely physical description. But although this intrinsic aspect eludes our physical theories, it does not elude our inner observations. Our own consciousness constitutes the intrinsic aspect of the brain, and this is our clue to the intrinsic aspect of other physical things. To paraphrase Arthur Schopenhauer’s succinct response to Kant: We can know the thing-in-itself because we are it.

Dual-aspect monism comes in moderate and radical forms. Moderate versions take the intrinsic aspect of matter to consist of so-called protoconscious or “neutral” properties: properties that are unknown to science, but also different from consciousness. The nature of such neither-mental-nor-physical properties seems quite mysterious. Like the aforementioned quantum theories of consciousness, moderate dual-aspect monism can therefore be accused of merely adding one mystery to another and expecting them to cancel out.

The most radical version of dual-aspect monism takes the intrinsic aspect of reality to consist of consciousness itself. This is decidedly not the same as subjective idealism, the view that the physical world is merely a structure within human consciousness, and that the external world is in some sense an illusion. According to dual-aspect monism, the external world exists entirely independently of human consciousness. But it would not exist independently of any kind of consciousness, because all physical things are associated with some form of consciousness of their own, as their own intrinsic realizer, or hardware.

As a solution to the hard problem of consciousness, dual-aspect monism faces objections of its own. The most common objection is that it results in panpsychism, the view that all things are associated with some form of consciousness. To critics, it’s just too implausible that fundamental particles are conscious. And indeed this idea takes some getting used to. But consider the alternatives. Dualism looks implausible on scientific grounds. Physicalism takes the objective, scientifically accessible aspect of reality to be the only reality, which arguably implies that the subjective aspect of consciousness is an illusion. Maybe so—but shouldn’t we be more confident that we are conscious, in the full subjective sense, than that particles are not?

A second important objection is the so-called combination problem. How and why does the complex, unified consciousness of our brains result from putting together particles with simple consciousness? This question looks suspiciously similar to the original hard problem. I and other defenders of panpsychism have argued that the combination problem is nevertheless not as hard as the original hard problem. In some ways, it is easier to see how to get one form of conscious matter (such as a conscious brain) from another form of conscious matter (such as a set of conscious particles) than how to get conscious matter from non-conscious matter. But many find this unconvincing. Perhaps it is just a matter of time, though. The original hard problem, in one form or another, has been pondered by philosophers for centuries. The combination problem has received much less attention, which gives more hope for a yet undiscovered solution.

The possibility that consciousness is the real concrete stuff of reality, the fundamental hardware that implements the software of our physical theories, is a radical idea. It completely inverts our ordinary picture of reality in a way that can be difficult to fully grasp. But it may solve two of the hardest problems in science and philosophy at once.

Hedda Hassel Mørch is a Norwegian philosopher and postdoctoral researcher hosted by the Center for Mind, Brain, and Consciousness at NYU. She works on the combination problem and other topics related to dual-aspect monism and panpsychism.



Topic: Mandelbaum, Hossenfelder: probably no computer simulation (new) [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 13.04.17 02:34









Science doesn’t have all the answers. There are plenty of things it may never prove, like whether there’s a God. Or whether we’re living in a computer simulation, something argued by Swedish philosopher Nick Bostrom and others, and maybe your stoned friend Chad last week.

This kind of thinking made at least one person angry: theoretical physicist and blogger Sabine Hossenfelder of the Frankfurt Institute for Advanced Studies in Germany. Last week, she took to her blog to vent. It’s not the statement “we’re living in a simulation” that upsets Hossenfelder. It’s the fact that philosophers are making assertions that, if true, should most certainly manifest themselves in our laws of physics.

“I’m not saying it’s impossible,” Hossenfelder told Gizmodo. “But I want to see some backup for this claim.” Backup to prove such a claim would require a lot of work and a lot of math, enough to solve some of the most complex problems in theoretical physics.

So, you’d like to go prove that the universe is actually a simulation built by some programmer. No, you’re not religious and you’re not saying that God created the universe! You’re just saying that some all-knowing higher power designed the universe and life in his image, which you think is completely different. Let’s start with the assumption that ‘computer simulation’ means we’re living in a universe where all of space and time is based on discrete bits of data like a computer, with 1s and 0s.

That would require that everything in the universe, at its smallest scale, have some definite property, some obvious state of yes or no. We already know that isn’t true, explained Hossenfelder. There are few definite things in quantum mechanics, only probabilities. Elementary particles like electrons have a property called spin, for example. Quantum mechanics says that if we’re not looking at the particles, we can’t say what their spin value is; we can only model the probability of each spin value. That’s what Schrödinger’s cat is all about. If some process determined by quantum mechanics, like radioactive decay, were in charge of whether a cat in a closed box was alive or dead, then our present understanding of physics implies the cat is both alive and dead simultaneously, until we open the box to take a look. Quantum mechanics and the classical bits computers are based on don’t get along.

If you expand the problem, you can code lots of classical bits, whose values are fixed, into quantum bits. Quantum bits don’t have a definite value of zero or one, but instead have some probability that they could assume either value. One physicist, Xiao-Gang Wen at the Perimeter Institute, has tried to model this idea, explaining the universe as being made of these “qubits.” Hossenfelder said that Wen’s models seem to mostly agree with the Standard Model of physics, the mathematics behind all of our particles, but he still can’t get his models to correctly predict relativity. And, “he’s not claiming we’re living in a computer simulation,” said Hossenfelder. He’s just describing a qubit-based universe.
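To make the distinction concrete, here is a minimal sketch in Python (an illustration of the bit/qubit difference only, not of Wen's models): a classical bit always holds a definite value, while a qubit holds two amplitudes and only yields a definite value, probabilistically, when measured.

import math
import random

bit = 1   # a classical bit: always in a definite state, 0 or 1
print("classical bit:", bit)

# A qubit: two amplitudes (a, b) with |a|^2 + |b|^2 = 1.
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # an equal superposition

def measure():
    """Collapse the qubit: 0 with probability |a|^2, else 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(counts)   # roughly [5000, 5000]: no definite value before measurement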

Any proof that we’re living in a simulation would also need to re-derive all of the laws of particle physics (and special and general relativity) using a different interpretation of quantum mechanics than what our current laws are based on, in a way that perfectly describes our universe. There are people actually devoting their lives to doing these things, and so far, it’s not adding up.

Theoretical computer scientist Scott Aaronson said that there are still theories combining gravity and quantum mechanics that might work if the universe were made of quantum bits—so if you’d like to solve one of the toughest problems in theoretical physics, definitely give it a go. Aaronson was more in the why-does-it-matter camp when it comes to the question of whether we’re in a computer simulation or not. “Why not simplify the theory by cutting the aliens out of it,” he asked, “if they’re not really adding anything to the predictions?” Essentially, positing aliens or a master programmer just invokes some sort of higher being to explain away our problems. And if our theories work without us living in a simulation, why do we need the simulation explanation at all?

The real shame of the whole issue is that modifying the question can make for some really interesting research questions. For instance, can the rules of computing create a simulation like the universe? A universe like ours would potentially require 10^122 qubits, said Aaronson. (That’s 1 with 122 zeroes after it, a meaninglessly large number—there are about 10^80 atoms in the universe.) And can the universe solve the halting problem—that is, can the universe calculate its own end, something computer programs can’t do?
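For the curious, a number of Aaronson's magnitude falls out of a back-of-envelope holographic estimate, roughly one qubit per Planck area on the cosmic horizon; the sketch below uses approximate constants and is only that, a sketch:

import math

PLANCK_LENGTH = 1.616e-35   # meters
HORIZON_RADIUS = 1.4e26     # rough Hubble radius, meters

area = 4 * math.pi * HORIZON_RADIUS ** 2
qubits = area / (4 * PLANCK_LENGTH ** 2)   # entropy ~ area / (4 l_p^2)
print(f"~10^{math.log10(qubits):.0f} qubits")   # prints ~10^122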

Ultimately, someone who believes the universe is a simulation can just alter the simulation’s parameters so they’re always right. But that’s not science, it’s religion with aliens or a master programmer instead of God, and more boring because there aren’t any fun songs or tasty food rituals.

So, neither Gizmodo nor Hossenfelder nor Aaronson are saying we do or don’t live in a simulation. We are saying that if you think you can prove it, you’ll need a lot more than some hand-waving or philosophical waxing. You’ll need some hard evidence that the fabric of the universe itself works the same way a computer does, and agrees with all of our most complex laws of physics.

“I don’t want to discourage anyone from trying,” said Hossenfelder. “But what annoys me much more is this general dismissal of the theories that we have already.”

___




No, we probably don't live in a computer simulation, by Sabine Hossenfelder

According to Nick Bostrom of the Future of Humanity Institute, it is likely that we live in a computer simulation (http://backreaction.blogspot.com/2015/01/do-we-live-in-computer-simulation.html). And one worry is that the superintelligence running our simulation might shut it down.

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lacking knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything - it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.

First, to get it out of the way, there’s a trivial way in which the simulation hypothesis is correct: You could just interpret the presently accepted theories to mean that our universe computes the laws of nature. Then it’s tautologically true that we live in a computer simulation. It’s also a meaningless statement.

A stricter way to speak of the computational universe is to make more precise what is meant by ‘computing.’ You could say, for example, that the universe is made of bits and an algorithm encodes an ordered time-series which is executed on these bits. Good - but already we’re deep in the realm of physics.

If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. This might be somebody’s universe, maybe, but not ours. You either have to overthrow quantum mechanics (good luck), or you have to use qubits. [Note added for clarity: You might be able to get quantum mechanics from a classical, nonlocal approach, but nobody knows how to get quantum field theory from that.]
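One standard way to see the gap (a textbook check, sketched here for illustration; it is not part of the original post): quantum correlations in the CHSH test reach 2√2, while any model built from locally fixed classical bits is capped at 2.

import math

def E(a, b):
    """Quantum correlation for a spin singlet at analyzer angles a, b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83; local classical bits cannot exceed 2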

Even from qubits, however, nobody’s been able to recover the presently accepted fundamental theories – general relativity and the standard model of particle physics. The closest attempts so far, such as Xiao-Gang Wen’s qubit models, are still far away from getting back general relativity. It’s not easy.

Indeed, there are good reasons to believe it’s not possible. The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. The effects of violating the symmetries of special relativity aren’t necessarily small and have been looked for – and nothing’s been found.

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation.

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses and then consult them on questions of importance.

So, yes, I think artificial consciousness is on the horizon. I also think it’s possible to convince a mind with cognitive abilities comparable to that of humans that their environment is not what they believe it is. Easy enough to put the artificial brain in a metaphoric vat: If you don’t give it any input, it would never be any wiser. But that’s not the environment I experience and, if you read this, it’s not the environment you experience either. We have a lot of observations. And it’s not easy to consistently compute all the data we have.

Besides, if the reason you build an artificial intelligence is consultation, making it believe reality is not what it seems is about the last thing you’d want.

Hence, the first major problem with the simulation hypothesis is to consistently create all the data which we observe by any means other than the standard model and general relativity – because these are, for all we know, not compatible with the universe-as-a-computer.

Maybe you want to argue it is only you alone who is being simulated, and I am merely another part of the simulation. I’m quite sympathetic to this reincarnation of solipsism, for sometimes my best attempt of explaining the world is that it’s all an artifact of my subconscious nightmares. But the one-brain-only idea doesn’t work if you want to claim that it is likely we live in a computer simulation.

To claim it is likely we are simulated, the number of simulated conscious minds must vastly outnumber those of non-simulated minds. This means the programmer will have to create a lot of brains. Now, they could separately simulate all these brains and try to fake an environment with other brains for each, but that would be nonsensical. The computationally more efficient way to convince one brain that the other brains are “real” is to combine them in one simulation.

Then, however, you get simulated societies that, like ours, will set out to understand the laws that govern their environment to better use it. They will, in other words, do science. And now the programmer has a problem, because it must keep close track of exactly what all these artificial brains are trying to probe.

The programmer could of course just simulate the whole universe (or multiverse?) but that again doesn’t work for the simulation argument. Problem is, in this case it would have to be possible to encode a whole universe in part of another universe, and parts of the simulation would attempt to run their own simulation, and so on. This has the effect of attempting to reproduce the laws on shorter and shorter distance scales. That, too, isn’t compatible with what we know about the laws of nature. Sorry.
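The regress is easy to put numbers on. A toy sketch (the 10% fraction is an assumption for illustration only): if each universe can devote just part of its resources to the simulation it hosts, capacity collapses geometrically with nesting depth.

FRACTION = 0.1    # assumed share of resources a universe can spend on its child
resources = 1.0   # the base universe, normalized

for level in range(6):
    print(f"nesting level {level}: {resources:.0e} of the base resources")
    resources *= FRACTION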

From a recent interview with Stephen Wolfram:
[Maybe] down at the Planck scale we’d find a whole civilization that’s setting things up so our universe works the way it does.

I cried a few tears over this.

The idea that the universe is self-similar and repeats on small scales – so that elementary particles are built of universes which again contain atoms and so on – seems to hold a great appeal for many. It’s another one of these nice ideas that work badly. Nobody’s ever been able to write down a consistent theory that achieves this – consistent both internally and with our observations. The best attempt I know of involves limit cycles in theory space, but to my knowledge that doesn’t really work either.

Again, however, the details don’t matter all that much – just take my word for it: It’s not easy to find a consistent theory for universes within atoms. What matters is the stunning display of ignorance – not to mention arrogance – demonstrated by the belief that for physics at the Planck scale anything goes. Hey, maybe there’s civilizations down there. Let’s make a TED talk about it next. For someone who, like me, actually works on Planck scale physics, this is pretty painful.

To be fair, in the interview, Wolfram also explains that he doesn’t believe in the simulation hypothesis, in the sense that there’s no programmer and no superior intelligence laughing at our attempts to pin down evidence for their existence. I get the impression he just likes the idea that the universe is a computer. (Note added: As a commenter points out, he likes the idea that the universe can be described as a computer.)

In summary, it isn’t easy to develop theories that explain the universe as we see it. Our presently best theories are the standard model and general relativity, and whatever other explanation you have for our observations must first be able to reproduce these theories’ achievements. “The programmer did it” isn’t science. It’s not even pseudoscience. It’s just words.

All this talk about how we might be living in a computer simulation pisses me off not because I’m afraid people will actually believe it. No, I think most people are much smarter than many self-declared intellectuals like to admit. Most readers will instead correctly conclude that today’s intelligentsia is full of shit. And I can’t even blame them for it.



Topic: + Comments to "No, we probably don't live in a..." (new) [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 13.04.17 02:36




131 comments:
naivetheorist said...
bee:

just a correction. Stephen Wolfram does NOT believe that the universe IS a computer (or a cellular automaton for that matter). if you read the beginning section of chapter 7 in his quite literally weighty book "A New Kind of Science", you'll find that he takes the role of the theoretical physicist to be (in agreement with the view of von Neumann) constructing models that predict experimentally observed phenomena (so he is, like you, a phenomenologist, if i understand what you mean when you say you are a phenomenologist). he says in his lectures, blogs and book that "the universe (or any specific phenomenon) behaves AS IF it is...". he does not claim that nature does, in fact, follow the behavior of models, which are simplified representations of aspects of reality (e.g. stephen might say that just because you can describe the flight path of a frisbee using differential equations does not mean that a person or a dog actually solves differential equations when playing frisbee catch).

best regards,

richard
9:25 AM, March 15, 2017
spink007 said...
The computer simulation hypothesis is what you get when you cross philosophy with journalism.
9:32 AM, March 15, 2017
SylviaFysica said...
Dear Sabine,

I think the simulation hypothesis belongs with other skeptical scenarios: yes, it is logically possible that the world could be very different from how it appears to us now or how we model it so far, but, no, there is no good reason to focus on this particular possibility. We should assign some probability to it, and then get on with science and critical thinking based on the higher probability cases. Eric Schwitzgebel has recently defended such a view under the name "1% skepticism". If we ever get evidence corroborating one of the - what are now considered to be - skeptical scenarios, then we should update our probabilities and develop theories for it, but prior to that it makes no sense: without evidence, there is nothing to work with.
One complication with this particular skeptical scenario is that Bostrom has offered an argument to the conclusion that we should assign high probability to the simulation hypothesis. Of course, there may be flaws in the argument, but even if you were to accept the argument, the fact that there is no direct evidence available seems a good reason for scientists not to spend too much time on it.
So, I definitely understand why all the attention to the simulation hypothesis annoys you, and the above points would be my way of responding to someone who brings it up.

Best wishes,
Sylvia
9:47 AM, March 15, 2017
Able Lawrence said...
An alternative way to refute the simulation hypothesis would be to estimate the computational requirements for running a simulation of the visible universe, even assuming qubits. What would the memory requirement be, and can't we estimate the information content of the simulation vis-a-vis the simulated in terms of entropy? Perhaps that would show the meaninglessness of the simulation argument. It reminds me of Jorge Luis Borges' short story about the country of cartographers who built a map as big as the country.
9:51 AM, March 15, 2017
AT said...
Hmm, somehow I am not entirely convinced by your arguments (or I did not read them carefully enough) that there is no simulation. If someone really wants to pretend that there is quantum mechanics to (virtual) experimental physicists, he/she/it will find a way. Just compute some time into the future and adjust past results as necessary to prevent suspicions (think superdeterminism).

Of course, the question remains: why should one wish to degrade a virtual/real intelligence to a lab mouse? So, yes, overall I agree with you that the whole idea seems (like the multiverse) to amount to abandoning the theoretical physicist's job of describing nature.

Half off-topic, I wish to mention the Boltzmann Brain here, yet another mad idea about which I would like to read a blog post ;-)
10:07 AM, March 15, 2017
Uncle Al said...
Future of Humanity is a mumble factory trading apocalypse for self-importance. It is vacuum eager for gilded containers, the macroeconomics of political connivance. Future of Humanity demands credulity for shaking an aspergillum dispensing ubiquitous dread.

"today’s intelligencia is full of shit" and so willing to share, at gunpoint. Quis custodiet ipsos custodes? Entropy. Fear soft landings - the Third World.
10:18 AM, March 15, 2017
Sabine Hossenfelder said...
Hi Richard,

ok, thanks for pointing that out. I should have been more careful phrasing that; will fix it.
10:32 AM, March 15, 2017
Sabine Hossenfelder said...
Able,

Well, Bostrom claims he's estimated it.
10:35 AM, March 15, 2017
Sabine Hossenfelder said...
AT,

Well, quite possibly you're not convinced there's no simulation because that wasn't what I was trying to argue. I was merely saying that the argument that it is likely we live in a simulation is wrong, or rather not even wrong - it's simply not an argument that lives up to the scientific standard. Maybe we live in a simulation - but I think it's unlikely for the reasons mentioned above: The difficulties in doing this seem to me vastly underestimated.

Yes, I've meant to write a rant on Boltzmann brains for some while, thanks for reminding me of this...

Best,

B.
10:40 AM, March 15, 2017
paperpandao1o said...
Very interesting. When it comes to separating junk from real science, you are my spirit animal, Sabine.
10:42 AM, March 15, 2017
Phillip Helbig said...
" It reminds me of Jorge Luis Borges short story on the country of cartographers who built a map as big as the country."

As far as I know, this scenario first appeared in Sylvie and Bruno Concluded by Lewis Carroll. Interestingly, this book also contains the idea of a train running in a tunnel which is a chord of the Earth, powered by gravity, with the journey always taking the same amount of time (about 42 minutes), regardless of the length of the chord.
10:43 AM, March 15, 2017
Phillip Helbig said...
"Yes, I've meant to write a rant on Boltzmann brains for some while, thanks for reminding me of this... "

Don't wait too long, otherwise one will materialize out of the void and write the post before you do!
10:44 AM, March 15, 2017
Georg said...
Hello Bee,
thank You for this Deep Thought(s) :=)
Georg
10:46 AM, March 15, 2017
aburt said...
Back when Bostrom first proposed this, I wrote a rebuttal, "Simulations and Reality in WYSIWYG Universes", exposing flaws in his math. (The paper was rejected by the journal he'd published his piece in, where he was, it appeared from the comments, the peer reviewer who rejected it. It was ultimately published in the SFWA Bulletin, in 2009, and is now available on Amazon/etc. as a little standalone thought piece ebook.)

Bottom line, by my analysis, the odds are much higher that the universe is exactly as we see it, not a simulation. (Though some other interesting conclusions follow from the math, about the nature of such universes, and the end game for intelligent inhabitants.) :) It was a fun analysis.
11:07 AM, March 15, 2017
Rob van Son (Not a physicist, just an amateur) said...
@Able
"An alternative way to refute the simulation hypothesis would be to estimate the computational requirements to run a simulation for the visible universe, even assuming qubits."

That has actually been done by Seth Lloyd
Computational capacity of the universe
https://arxiv.org/abs/quant-ph/0110141

I do not think he used qubits, but I am not sure. It is relatively easy to argue that, given the laws of nature, a simulation of the universe takes (much) more space than the universe itself. If you assume that the simulation runs according to other laws, anything goes.
11:11 AM, March 15, 2017
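For readers who want to see roughly where Lloyd's numbers come from, here is a back-of-the-envelope sketch, not a substitute for the paper: the Margolus-Levitin theorem bounds elementary operations per second by 2E/(πħ), and multiplying by the age of the universe gives the often-quoted total. The mass figure below is an assumed round number, not a measured input.

```python
import math

# Rough version of Seth Lloyd's "computational capacity of the universe".
# The inputs are order-of-magnitude assumptions, not precise constants.
hbar = 1.055e-34   # reduced Planck constant, J*s
c    = 3.0e8       # speed of light, m/s
mass = 1e53        # assumed mass-energy content of the observable universe, kg
age  = 4.3e17      # age of the universe, s (~13.8 billion years)

energy = mass * c**2                          # total energy, J
ops_per_sec = 2 * energy / (math.pi * hbar)   # Margolus-Levitin bound
total_ops = ops_per_sec * age

print(f"total operations ~ 10^{math.log10(total_ops):.0f}")  # about 10^121
```

This lands within an order of magnitude of the ~10^120 operations Lloyd quotes, which is the force of the comment above: a machine performing that many operations cannot comfortably fit inside the universe it is supposed to be simulating.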
Matt Mahoney said...
Bostrom's simulation argument mistakenly uses probability theory to make statements about reality instead of statements about belief. If you say that the probability that we are living in a simulation is p, it means that you tested n universes and found pn of them to be simulations. In reality, no such test exists so it is not even a meaningful question.
11:33 AM, March 15, 2017
Kenneth Wharton said...
" there’s a trivial way in which the simulation hypothesis is correct: You could just interpret the presently accepted theories to mean that our universe computes the laws of nature. Then it’s tautologically true that we live in a computer simulation."

I wouldn't even go that far. Here's an argument against even this "trivial" point: https://arxiv.org/abs/1211.7081 . Whether or not you buy the argument, I hope you would at least concede that the statement "the universe computes itself like a computer" is not a tautology.
11:43 AM, March 15, 2017
Kenneth Wharton said...
This comment has been removed by the author.
11:49 AM, March 15, 2017
Matt Mahoney said...
Just because the universe is computable (Lloyd says 10^120 quantum operations) does not mean it is computable in a way that is useful to us. Wolpert proves that two computers cannot mutually simulate each other, which implies that a computer cannot simulate itself. A computer cannot have enough memory to know its own state because it needs at least one more bit to make any observation from its simulation and then has to include that bit in its state. Any computer that models the exact physics of our universe would have to exist outside our observable universe.
11:55 AM, March 15, 2017
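Mahoney's memory point is essentially a pigeonhole count; put as a one-line inequality (an editor's paraphrase of his argument, not a quote):

```latex
% An n-bit machine has 2^n configurations. A faithful self-model must
% store its own n-bit state plus at least one observation bit:
\underbrace{n}_{\text{own state}} + \underbrace{1}_{\text{observation bit}} > n
\quad\Rightarrow\quad \text{no $n$-bit machine can hold a complete model of itself.}
```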
Ben Jones said...
Suggesting the universe we're in can't be a simulation by appealing to computational complexity limits based on what we see in this universe is a bit of a circular argument. 'It seems very difficult to me' is like a computer game character having a hard time believing in computer game designers. Unknown unknowns.

I think the point of the hypothesis is that if it's possible to simulate experience (likely), and it's possible to nest simulations (also likely), it's a strange assumption to say you must be at the top of the tree. Nothing more controversial than that. Neither do I see any genuine showstopper in quantum mechanics. Maybe whatever box we are all running on really does simulate every single interaction! Or maybe it takes clever shortcuts, or fools us every time, or collapses wavefunctions on its own or whatever - we don't know anything about it. I personally find any of those ideas unlikely to the point of lunacy, but I don't have any ideological issue with it like you seem to, Sabine.
12:01 PM, March 15, 2017
Evan Thomas said...
"It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible"

I think Roger Penrose made pretty compelling arguments that it might very well be impossible. Everyone gets hung up on Gödel's Theorem here and seems to have missed the simpler arguments he made, which I thought were more powerful in some ways. To show that consciousness is not (fully) algorithmic in nature, we just need simple examples where we can show consciousness can do something an algorithm can't. Penrose lists some math problems that do not have algorithmic solutions (even in principle). Not only have we solved these problems, for some we've even proved they cannot be solved by an algorithm. The ones he talked about that come to mind are the halting problem, tiling problem and general Diophantine equations, although there were others, I believe.

Sounds really hard to simulate a part of the universe that is non-algorithmic doing something non-algorithmic like proving a problem is non-algorithmic, by using an ... algorithm.

Although, I seem to recall Penrose leaving open room for non-computational "algorithms", while admitting he has no idea what this would be like!
12:08 PM, March 15, 2017
Paul Mayo said...
This comment has been removed by the author.
12:15 PM, March 15, 2017
Evan Thomas said...
Matt Mahoney, this was an interesting statement:

"A computer cannot have enough memory to know its own state because it needs at least one more bit to make any observation from its simulation and then has to include that bit in its state."

Sounds like a possible argument against a computer becoming self-aware, at least fully?

Then again, I'm not sure most humans are even close to fully self-aware!

Anyhow, thanks for sharing, will have to look into this.
12:17 PM, March 15, 2017
Serafino Cerulli-Irelli said...
Something more about Borges. “Nosotros (la indivisa divinidad que opera en nosotros) hemos soñado el mundo. Lo hemos soñado resistente, misterioso, visible, ubicuo con el espacio y firme en el tiempo; pero hemos consentido en su arquitectura tenues y eternos intersticios de sinrazón para saber que es falso." J.L.Borges, 'Avatares de la Tortuga'

"We (the undivided divinity that operates in us) have dreamed the world. We have dreamed it gnarly, mysterious, observable, continuous in space and reliable in time; but we have allowed into its architecture anomalous gaps of irrationality both indeterminate and timeless so we might know that it is a contrivance."

12:42 PM, March 15, 2017
L said...
Hi Bee, what's your opinion of https://arxiv.org/abs/1210.1847 ?
1:18 PM, March 15, 2017
עמיר ליבנה בר-און said...
It seems to me that there's another solution for the "accuracy of simulation" problem nobody talks about, a solution with amusing consequences.

The beings-that-are-not-called-gods simulate each mind individually, and turn people off once they study too much physics and strain the simulation resources. The simulation of society includes physicists of course, but the simulated minds only enjoy popular articles and the fruits of technology. (The simulators might need to exclude psychologists and sociologists, and some of the more observant philosophers. But still, the simulated beings would far outnumber the physical ones.)

Obviously, there's a very different distribution of professions on simulated beings than in our society, since the simulated physicists are killed early. But since the society is simulated, there is no need for it to match the distribution of brains.

The degree of belief in this simulation hypothesis would thus depend on your occupation. If you are a physicist, the argument above would imply you must be a real live mind, living its life in the physical universe. But if you are only a reader of physics blogs, you may as well be simulated - simulating just your mind likely doesn't even require any quantum effects. This effect is stronger the less you interact with society, since you require fewer simulation resources that way (as society is made up, just for you). So in this scenario we can expect the less socially fluent non-physicists to believe most strongly that they are living in a simulation.
2:05 PM, March 15, 2017
Louis Tagliaferro said...
Sabine said…

“The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. …the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.”

I wanted to ask that you consider discussing the issue above (discretization and special relativity) in more depth for a future blog post? I appreciate how you often can explain the technical in a way that gives non-physicists like me a better understanding of the science and issues involved.
2:06 PM, March 15, 2017
Uncle Al said...
The simulation is analog not digital. It does not calculate. Its size is not strongly bound to be larger than its construct's size. The universe is its own simulation! There's little employment to be had here.

What simulates the simulation? God (retrograde intellectual diversity). Give generously to your local tax-exempt sales outlets.
3:11 PM, March 15, 2017
Kaleberg said...
Penrose is a mystic. I always have trouble taking him seriously.

I've always liked Bill Gosper's approach in HAKMEM from 1972. He considers the various ways that a computer program can tell the number system of the machine it is running on. Back then, not all computers were twos-complement. There were ones-complement and a variety of decimal machines. He then raised the simulation issue:

"By this strategy, consider the universe, or, more precisely, algebra: Let X = the sum of many powers of 2 = ...111111 (base 2). Now add X to itself: X + X = ...111110. Thus, 2X = X - 1, so X = -1. Therefore algebra is run on a machine (the universe) that is two's-complement."

To capture the spirit of the times: "If arithmetic overflow is a fatal error, some fascist pig with a read-only mind is trying to enforce machine independence." Ah, the 1970s.
4:12 PM, March 15, 2017
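Gosper's 2-adic joke can be checked mechanically at any finite width: the all-ones bit pattern satisfies both of his identities modulo 2^w, which is exactly the definition of two's complement. A small illustrative check (Python):

```python
# Gosper's "...111111 = -1" argument at finite width: the all-ones
# bit pattern is congruent to -1 modulo 2^w for every width w.
for w in (8, 16, 32, 64):
    x = (1 << w) - 1                                  # w ones: 0b111...1
    assert (x + x) % (1 << w) == (x - 1) % (1 << w)   # 2X = X - 1 (mod 2^w)
    assert (x + 1) % (1 << w) == 0                    # hence X = -1 (mod 2^w)
print("...1111 behaves as -1 at every finite width")
```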
Dev Null said...
I don't disagree with the thrust of your argument, but one of your statements strikes me as unlikely, specifically "More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses". This really reminds me of the early days of computing where many thought there'd only be a need for a handful of computers or that "640k ought to be enough for anybody."
5:24 PM, March 15, 2017
Sabine Hossenfelder said...
Dev,

Well, please allow me a little fun, I sometimes like to imagine what the future might look like. It's not that I think it'll stay that way forever. But keep in mind that in the early days indeed there were only a handful of computers. And they were huge. And not everyone got to book time on them. So it seems likely to me we'll pass through a similar state with quantum computers.
2:28 AM, March 16, 2017
Unknown said...
I see no reason to assume the "simulators" would go to extraordinary lengths to hide the truth of the simulation from us. So what if we discover it? Perhaps that's the point? Maybe that's when we "win the game"?

Who knows? Maybe it's just experimental cosmology.

What I find compelling about the hypothesis is the Bayesian argument of 'If we could simulate a universe, then so could someone else, and therefore it's more likely than not we're already in one (and maybe they are too!).' Also, if there turns out to be no TOE, then a simulation run on piecewise, discrete functions seems a perfectly compelling hypothesis. There being, of course, no evidence to suggest a "theory of everything" exists, and much evidence to suggest it doesn't.

PS It's trivially easy to hide "the truth" from scientists if you're God, just throw the tiniest bit of noise into something and subtle effects are swept right under the rug. Good luck with your 5-sigma if God's against you.
2:54 AM, March 16, 2017
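Since Bostrom's actual argument keeps coming up in this thread, here is the bare counting behind it, with made-up illustrative inputs: if a fraction f_p of civilizations reach a simulating stage and each runs N ancestor-simulations, the fraction of all observer-histories that are simulated is f_p·N/(f_p·N + 1), and the argument is only that this ratio approaches 1 as N grows.

```python
# The core ratio in Bostrom-style simulation arguments: the fraction of
# simulated observer-histories among all observer-histories.
# f_p and n below are illustrative inputs, not claims about reality.
def simulated_fraction(f_p: float, n: float) -> float:
    """f_p: fraction of civilizations that run ancestor-simulations;
    n: average number of simulations each such civilization runs."""
    return (f_p * n) / (f_p * n + 1)

for f_p, n in [(0.01, 1), (0.01, 1e6), (1.0, 1e6)]:
    print(f"f_p={f_p}, N={n:g}: simulated fraction = {simulated_fraction(f_p, n):.6f}")
```

Note that the arithmetic is the uncontroversial part; the objection in the post above targets the premise the ratio silently assumes, namely that simulations reproducing our observed physics are feasible at all.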
Serafino Cerulli-Irelli said...
Sometimes we read that QM is like an 'operating system' (OS). I tend to agree with that. A question then arises. If QM is a sort of OS (superpositions, quantum darwinism, quantum non-separability, quantum contextuality, all those famous quantum principles, etc., are then routines or subroutines of that OS?) can we say that QM can ***not*** be, itself, a 'simulation'? It seems so, to me. (I remember that Einstein asked: Is the Old One [or the Great Simulator?] ***free*** to choose the physical laws?)
5:16 AM, March 16, 2017
akidbelle said...
Hi Sabine,

thanks for the post; I guess the laws of nature enabling a computer to simulate our universe would have to be very different.

Why would those laws be any more natural than the ones we believe we know? If they are not, the hypothesis holds no interest.

J.
7:02 AM, March 16, 2017
nicolas poupart said...
Serafino,

The OS metaphor is doubly wrong. First, QM is Turing complete, and it compares much more to a programming language than to an OS. Second, an OS suggests that there is something more fundamental and necessary to QM's existence, and this proposition is false. The Turing machine demonstrates that computability emerges with a minimum of components; QM does not need to be simulated, since its simple existence is enough to generate the whole of computability.
9:11 AM, March 16, 2017
mlmcl said...
The simulation notion conceals professional envy. As a life-long programmer, I relate, even if I don't sympathize. Compared to today's virtual worlds we find the universe doesn't hang, doesn't crash, doesn't run out of resources, and doesn't need maintenance. There are no race conditions, no corrupt variables, no unintended side-effects. But this is like comparing a photo of a tree to a tree, it's not an analogy that flows in the other direction.

The term "artificial intelligence" should be abandoned. When we create intelligence there will be nothing artificial about it. And there is no special reason to believe that we'll properly recognize it or fully understand it. Consider our failure to communicate with other intellectually advanced species, like whales and elephants. We do it on our terms, not theirs, despite decades of study.
9:26 AM, March 16, 2017
JimV said...
I enjoyed the post, and don't like the simulation hypothesis either, but (like another commenter above, but independently) would very much like to hear more about this: "Indeed, there are good reasons to believe it’s not possible. The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity."

I had concluded (again as several commenters above) that any such simulation would have to be done in some higher-level universe with higher computational capacities, which takes it out of the realm of our science - however, I will say this: it is the first instance of a god-hypothesis (god as the entities in control of the simulation) which makes any sense to me. For example, a miracle (something not possible for the simulation's code to produce) could be done by the equivalent of hex-editing the data.

As for Penrose's argument, mentioned above, here's an algorithm he seems not to have considered:

1) Try something, even randomly, such as Edison's search for a practical light-bulb filament (not the first feasible thing, but the best thing he ever found in years of searching, was bamboo fiber).
2) If it doesn't work, try something else.
3) If it does work, write it down so future generations will know about it and go on to improve it.

That's how blind nature produced Penrose. It is also similar to the way Dr. Bee found some equation-solutions mentioned in her previous post.
10:06 AM, March 16, 2017
Richard Burke-Ward said...
Sabine, four things:

1. Physics is a mathematical framework that hopes to predict how the universe will behave on every level. In other words, it *encodes* the universe into algorithms. Our brains also encode what we perceive (differently). Language does the same (differently). We don't perceive raw reality, we perceive multiple, divergent encodings.

One of these coding systems is maths. But we don't know how the universe *is*, we only know how we perceive it. We codify inputs into things that mean something to us.

I am not a solipsist, mainly because I am not smart enough to understand how others codify reality. I can't experience how a physicist sees the curls and spirals that denote the decay of a charm quark; I can't experience the sensorium of a frog. But I can see these things happen. Therefore I am not alone.

I can only code (and therefore understand) things that are within the boundaries of my own physical, experiential limits. (These limits grow with time and experience.) The fact that I can see that there is a larger 'super-set' of coding systems - and that there are possibly *super*-super-sets beyond - is proof, to me, that there is no way to directly apprehend reality.

We can model reality's behaviour, somewhat, but we can't *experience* it in the raw.

So, we absolutely *do* live in a simulated universe. The question is, is it just a product of our moment-by-moment encoding, or is there some larger principle at work?

2. Sabine, you are quite right to say that it's mathematically impossible for a small fraction of the universe to be capable of perfectly simulating the entire universe it is part of - or, by extension, for that simulation to also contain another perfect simulation... But, like it or not, and however fuzzy the quantum edges might be, we do live in a granular universe. Our universe is not infinitely divisible.

So, what if the 'simulation' that we live in is actually more granular than the universe that created the simulation? What if any simulation *we* make is more granular than our own universe?

The paradox about universes-within-universes disappears. I haven't done any maths on this; but I assume you'd end up with a finite integral, even though there is an asymptotic infinity.

3. You make assumptions about *why* some entity would run a simulation: that it is in order to receive some expected or unexpected output - a 'result'. This is anthropomorphic, and may not be the case. I personally can't imagine any other reason; but then I am human, and my thinking is anthropomorphic by definition. It doesn't mean such reasons don't exist.

Maybe some meta-entity wants to know just how granular a simulation can become before it ceases to have meaning (on its layer of reality).

Maybe Boltzmann brains are real (in some finer-grained universe), and the 'brain' happened to emerge with a coding system that simulates the universe we experience. (If it didn't, we wouldn't be here.)

4. Why assume that the physics of our universe is the same physics as that of the universe running the simulation? Or even that the simulator-universe is part of our larger meta-verse? It could be something else entirely.


One other observation, from Iain M Banks's novel "Matter". The novel is an exploration of what a universal simulation might look like, and how we can be sure that what we experience is real. One character points out that the best argument *against* our universe being simulated is that we suffer. We experience (and inflict) pain, mortality, horror, grief.

What conceivable programmer could be so cruel?

Then again, we are talking by definition about things we cannot conceive.

Thanks Sabine, as always,

Richard BW
11:25 AM, March 16, 2017
Old Man said...
Bee
Your truth is your perception of reality interpreted by your human experience.
In a nutshell this means that every human brain is wired differently. This would require a simulation for not only every human on planet earth, but also every mammal and possibly every living thing.
david z
12:25 PM, March 16, 2017
Son Tran said...
Hi Sabine,

You are looking at it from a very limited perspective. Have you ever thought of reading about René Descartes? Everything comes down to energy. Even a thought or an idea is made of energy. And energy can be neither created nor destroyed; rather, it transforms from one form to another.
1:26 PM, March 16, 2017
APDunbrack said...
I think one should separate out two kinds of simulation arguments: anthropocentric and non-anthropocentric. The first sort, I think, is, while not entirely impossible, certainly implausible for a number of reasons you point out. Unfortunately, people tend to make their arguments for what we should do on this basis - "humans should be interesting" and so on. The second sort, I'll come back to.

There are also two ways to take simulation arguments: as science and as philosophy. The former is concerned with predictions from the hypothesis; the latter is concerned with arguments for or against the hypothesis as "true" (metaphysically). Many scientists, when discussing the simulation hypothesis, are doing so not as scientists, but as (typically naive) philosophers, dealing vaguely with ideas rather than actual concrete predictions (although there are a few who make predictions, whom you point out and whom I was previously unaware of).

That said: let's take a non-anthropocentric simulation hypothesis. I don't think this is entirely unreasonable; it just takes simulating a sufficiently complex universe. However, since we have no idea what the people who simulate us are simulating (or what technology they're using to do it), that seems to tell us nothing new scientifically. Insofar as the nested-simulation argument is valid, the simulation hypothesis makes no predictions.

My own issue with the argument in principle: you can then ask "what counts as a simulation." Sure, an actual computer simulation does - but what about a by-hand calculation of the same thing? What about mathematically solving the differential equations for the same phenomena, providing the same solution? What about just thinking about the solution? If a teacher teaches his students how to solve this differential equation, does that spawn a bunch of new universes? I think if you take this to its logical conclusion (combined with something like mathematical Platonism - things turn out differently if you think the math has to actually be discovered to exist), you find that universes can only be the laws themselves in some abstract sense, and you more-or-less end up being Max Tegmark. Not that there's anything wrong with that, but again, I think that ends up being unhelpful and unpredictive metaphysics rather than anything scientific.
4:26 PM, March 16, 2017
Count Iblis said...
We do live in our own brain's simulation of a virtual world based on real-world information. What we experience is not the real world but the virtual world, albeit events in the two worlds are highly correlated (unless you suffer from schizophrenia). You can do simple experiments to verify this, e.g. optical illusions like this one

https://twistedsifter.files.wordpress.com/2012/05/optical-illusion-same-gray-color-thumb-over-middle.jpg?w=800&h=600

if you obscure the boundary between the two squares with a finger, then both squares become equally dark. Or take the McGurk Effect:

https://www.youtube.com/watch?v=G-lN8vWm3m0
8:08 PM, March 16, 2017
Kaleberg said...
It helps also if you remember that simulating a brain doesn't mean simulating something that always gets right answers. Most conscious animals we know use a biologically implemented Bayesian scheme that only indirectly requires any QM effects. It's a pretty good scheme, but it isn't always right and isn't particularly "quantum".
9:18 PM, March 16, 2017
Dana said...
From what I understand, the simulation hypothesis is supported by mathematics but is not science. It's not falsifiable, as far as I can tell: it doesn't make any predictions which can yield a true or false discovery, and it does not reveal anything new about the universe. It does imply a multiverse, in my opinion.

The simulation hypothesis suffers from the same problem that the problem of other minds suffers from. It cannot be tested; it's not science, but a thought experiment. You can create a proof or come up with probabilities, but if it doesn't produce something testable by scientists, physicists in particular, then what good is it?

So we live in a simulation? So what?
10:23 PM, March 16, 2017
David Schroeder said...
While evidently not viable as a scientific theory, the simulation hypothesis would make a nice sci-fi background plot for a Star Trek episode.
5:59 AM, March 17, 2017
Uncle Al said...
@mlmcl... "Consider our failure to communicate with other intellectually advanced species, like whales and elephants. We do it on our terms, not theirs."

If cetaceans broadcast (encoded) sonar images, we are fools for assuming our linear versus their volumetric language. Perhaps lines of physical theory should be 2D+ε holographic projections (photographic thick emulsion versus binary optics embossed film) re Contact (1997) and the primer. Holograms do not calculate. Lenses do not calculate, but they lose phase information.
10:23 AM, March 17, 2017
Serafino Cerulli-Irelli said...
Nicolas [9:11 AM, March 16, 2017]. I can agree. QM compares more to a programming language (or a syntax or, maybe, a many-valued language like the Aymara language) than to an operating system. I also think that QM does not need to be simulated. I'm in trouble when I think about the very meaning of computation, in the QM context. What is computed? Information? What is information? How is this information Turing computable? Via wavefunctions? Another interesting subject is, of course, the Turing-Church-Deutsch principle, and related consequences. S.
11:44 AM, March 17, 2017
David Brown said...
"The idea that our universe is discretized clashes with observations because it runs into conflct with special relativity." If nature is a multiverse that needs to be explained in terms of a Fredkin-Wolfram network underlying the Planck scale, then all arguments involving energy, spacetime, and/or quantum information might be wrong unless justified by some approximation that uses Fredkin-Wolfram information. Google "einstein's field equations: 3 criticisms" for my viewpoint.
1:13 PM, March 17, 2017
Jonathan Miller said...
I never could tell the difference between 'the universe is a computer simulation' and 'there exists a god, and perhaps that god is even similar to one in one of our religions'. As a scientist I don't think it is an interesting hypothesis and as a practicing religionist, after a little thought, I concluded that it also wasn't relevant to my relationship with God. That doesn't mean that the hypothesis doesn't imply anything interesting to theologians or philosophers, for example it might imply that god (or that which is doing the simulation, to use less loaded language) wasn't perfectly omniscient/omnipotent.
3:34 PM, March 17, 2017
Unknown said...
The whole notion of creating an "artificial consciousness" in a simulation, including encoding a "sense of agency" and self-awareness, seems outlandish.

Many very smart physical scientists & engineers, who often have a reductionist world-view, seem to significantly misunderstand & underestimate the problem. Ray Kurzweil's "singularity" mumbo-jumbo also falls into that trap.

Consider the model organism, the nematode C. elegans. Every worm has an essentially identical "brain" of about 300 neurons. I believe its neural connections -- its "connectome" -- have been completely mapped.

Yet, how the connectome leads to the worm's most basic behaviours is poorly understood, eg see:
http://www.cell.com/current-biology/abstract/S0960-9822%2805%2900940-1?_returnURL=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0960982205009401%3Fshowall%3Dtrue

IMO a more likely scenario for "artificial consciousness" is creating a "Chinese room" type of simulation. It might very well pass any Turing test, but it would still be just an unconscious "Chinese room".

-- TomH
4:37 PM, March 17, 2017
Steve said...
> If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. This might be somebody’s universe, maybe, but not ours. You either have to overthrow quantum mechanics (good luck), or you have to use qubits.

Actually, if you have exponential classical resources available, there's no problem. This was proved in one of the first papers on quantum computation: http://epubs.siam.org/doi/abs/10.1137/S0097539796300921

>But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem?

This also seems like a non-problem. If it's a simulation, it could simply be restored from an earlier state as necessary, like save-scumming.

>Besides, if the reason you build artificial intelligences is consultation, making them believe reality is not what it seems is about the last thing you’d want.

However, I'm totally in agreement that the motivations of the programmers and hardware-owners are so bizarre and inexplicable as to throw the whole enterprise into serious doubt. Of what possible value is our universe to anyone who doesn't live here?
9:07 PM, March 17, 2017
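The "exponential classical resources" in the paper Steve cites show up concretely in brute-force state-vector simulation: n qubits require 2^n complex amplitudes, and every gate touches the whole vector. A minimal illustrative sketch (numpy), not a claim about how any real simulator would work:

```python
import numpy as np

# Brute-force state-vector simulation: memory is exponential in qubit count.
def memory_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16    # one complex128 amplitude per basis state

for n in (10, 50, 300):
    print(f"{n} qubits: {memory_bytes(n):.3e} bytes")
# 50 qubits already needs ~18 petabytes; 300 qubits exceeds, in bytes,
# the ~10^80 atoms in the observable universe.

# Even one single-qubit Hadamard on qubit k touches the full 2^n vector:
def hadamard(state: np.ndarray, k: int) -> np.ndarray:
    n = int(np.log2(state.size))
    psi = state.reshape([2] * n)
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.tensordot(h, psi, axes=([1], [k]))   # contract on qubit k
    return np.moveaxis(psi, 0, k).reshape(-1)

state = np.zeros(2 ** 3, dtype=complex)
state[0] = 1.0                                    # |000>
print(np.round(hadamard(state, 0), 3))            # superposition on qubit 0
```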
Sabine Hossenfelder said...
Steve,

Yes, first statement was very sloppy, sorry :/ Quantum mechanics should be possible as long as you have some kind of non-locality (in a suitable 'space-time'), it's quantum field theory that nobody knows how to do from classical bits. Hence my remark 'good luck': it might not be impossible, but nobody knows how to do it, and it will take some work to convince anyone it can be done. (The issue of locality is a thorny one. I am not at all sure it's even possible to replace the actual 'space-time' with a made-up space-time since locality is basically what defines the thing which we call 'space-time'.)

Second, I don't see how this helps - you'll still have to do the calculation which is what you were trying to avoid in the first place.

Either way, the message I was trying to get across is that if one wants to claim we live in a computer simulation one has to show it's possible to reproduce our observations this way. That's not easy, and vague words won't do.
3:27 AM, March 18, 2017
Henning Dekant said...
Whenever somebody espouses the simulation hypothesis to me I have to remind myself to be tolerant of religion. It is after all no more or less absurd than the idea that some guy thousands of years ago was nailed to a piece of wood to wash away our sins (whatever that is supposed to mean).
6:21 AM, March 18, 2017
Rob van Son (Not a physicist, just an amateur) said...
@Henning Dekant
" It is after all no more or less absurd than the idea that some guy thousands of years ago was nailed to a piece of wood to wash away our sins "

Human sacrifice was quite common throughout human history. Nothing singular about this story I think. Just that this particular way of torture was a common way to execute criminals.
9:26 AM, March 18, 2017
gurugeorge said...
Two random comments:-

1) For me the simulation idea falls down with "combinatorial explosion", which Daniel Dennett talked about in Consciousness Explained when discussing the Brain-in-a-Vat idea. Something like what you said about the choices an intelligent mind is going to make - a simulation would have to "anticipate" all the possible choices, and the possible results of those choices, etc., etc., ad infinitum.

2) Re. AI, I think that even with the great strides machine learning has been taking recently, it's not going to amount to actual intelligence: that will require different AIs talking to each other and making their own world together. IOW, intelligence is partly a function of sociality; most of the most intelligent creatures are intensely social animals (with the odd exception of the octopus).
11:56 AM, March 18, 2017
Count Iblis said...
It's not clear to me whether the result in the paper cited by Steve:

http://epubs.siam.org/doi/abs/10.1137/S0097539796300921

applies here. Suppose we just do a brute-force simulation of QFT: put everything on a lattice and compute the time evolution of the wave-functional. Then the classical simulation will yield an MWI-like evolution; you'll end up with different sectors that have different classical histories, to which you can assign probabilities using Born's rule. But we're interested in the inside view, and at that level Born's rule doesn't apply, because all sectors end up being simulated: an observer has equal probability to find him/herself in any of the sectors where he/she is present, regardless of the complex amplitudes that are supposed to yield the probabilities for the sectors.
2:14 AM, March 19, 2017
Sabine Hossenfelder said...
Count,

What's the lattice spacing?
2:55 AM, March 19, 2017
Count Iblis said...
You want our familiar length scales like the size of atoms to be much larger than the lattice spacing. One should therefore tune the bare couplings of the model defined on the lattice such that our familiar physics appears at scales much larger than the lattice spacing. The lattice artifacts lead to irrelevant operators, so they then become invisible at the scales we can probe. This can be done by using a model that has a critical fixed point and choosing the bare parameters such that the RG flow will let you hover near that fixed point for a large number of RG steps before you veer off toward the Standard Model.
5:13 AM, March 19, 2017
Sabine Hossenfelder said...
Count,

As you have noted yourself, you can do it only up to a limited precision, which highlights the problem I pointed out: How does the programmer know in advance what precision is about to be tested, and if the precision limit is reached, how does he or she know what data to fill in?
8:20 AM, March 19, 2017
Count Iblis said...
Sabine,

Of course, I don't believe it's true, but I guess that a simulation on a lattice would not allow field configurations to arise that correspond to an observer breaching the limits of the simulation.

Another problem is that the simulation actually doesn't seem to matter. Suppose that you simulate the early universe just after the Big Bang for a few seconds and then you stop. We then don't seem to appear in that simulation. However, the universe as it exists today is obtained by applying the time evolution operator to the early universe. Therefore, that simulation of the early universe does contain us, in a scrambled way. This suggests to me that the simulation wasn't necessary at all, leading me to Tegmark's mathematical multiverse.
4:34 PM, March 19, 2017
George Rush said...
Sabine H.>> I think most people are much smarter than many self-declared intellectuals like to admit. Most readers will instead correctly conclude that today’s intelligentsia is full of shit. - I agree 100% with this part, but not the rest. Thanks for your interesting post!
12:13 AM, March 20, 2017
zarzuelazen 27 said...
Hi Sabine, I feel you haven't fully grasped the implications of the theory that 'reality is information' (or reality is computation).

It's *not* saying that 'reality is a simulation'. No, it's saying something much more radical than that - it's saying that there is literally no difference between 'the real world' and 'a simulation'. It's saying that the whole notion of a 'real world' is meaningless - it's like the aether and can be dispensed with. The idea is that there's no hardware at all, no 'base level' exists - it's all 'software'.

This can work by allowing the qualifier 'real' to be continuous instead of binary. Instead of treating 'real' as a binary yes/no, we could think of 'real' as continuous, like the brightness of a light bulb. Then the idea is that there are only *degrees* of reality: things are more or less real, but no 'base level' of reality is needed.

Some questions for you: Is a simulated hurricane a 'real' hurricane? Answer: to some degree, yes. Is a simulated you really you? Answer: again, to some degree yes, up to and including 100%.

Is the 'simulation' of a thing actually drawing that thing into existence, by making it more 'real'?

And now here's the really huge kicker for you, the 'astonishing thought': Is the creation of the universe really completed yet, or is it still going on?

Think: what happens when one part of the universe simulates another part? (Remember: 'real' can be a matter of degree, and the simulation of a thing can literally summon that thing).
12:38 AM, March 20, 2017
Sabine Hossenfelder said...
zarzuelazen,

I think you haven't fully grasped what this blogpost is about.
3:11 AM, March 20, 2017
qsa said...
Actually, I think this issue is misunderstood. We do not need to simulate the whole universe to prove it. We only need the correct form of the theory of nature to calculate, let's say, the interactions of 100 atoms to experimental precision (in all aspects). It is clear that present-day theories are lacking, but it is also very clear that they are very close. So obviously we are almost there in proving the simulation hypothesis, at least that it is possible (maybe by an advanced civilization that has solved the problems of the human ones).

8:12 AM, March 20, 2017
John Anderson said...
To paraphrase G. Orwell, "They must be real intellectuals, no ordinary person would believe such nonsense."

Quis simuladiet ipsos simulades?

Did J. A. Wheeler know bit from Shinola?

So many other interesting problems: dark matter, matter-antimatter asymmetry, measurement problem, dark energy, neutrino oscillations, etc.
9:09 AM, March 20, 2017
akidbelle said...
zarzuelazen,

in any case, hardware or software or both, what does that change? Only that you and I can refer reality to some macroscopic category inherited from our immediate environment. Why would there be even the beginning of a similarity between that dichotomy and reality?

If there is only information and no substrate, then that information is active and interacts: I would not call that information.

J.
12:17 PM, March 20, 2017
Sabine Hossenfelder said...
qsa,

if you think we're close to simulating the presently accepted fundamental laws of nature, I recommend you read my post again
12:28 PM, March 20, 2017
qsa said...
Sabine,

I have read the article (many of the same arguments appear elsewhere) and many of the comments. It would help if your objections were spelled out more explicitly. Thanks.
1:54 PM, March 20, 2017
nicolas poupart said...
The problem is a curriculum problem: computer scientists have to take quantum computing courses (at least as an option) and physicists have to take courses in the theory of computation (at least as an option). It is indispensable that the notions of computable numbers, formal incompleteness, automata, and computational complexity be well grasped by physicists. This would avoid reinventing the wheel, and marveling at what is obvious to the other side.
2:56 PM, March 20, 2017
David Pearce said...
A nice post: thank you. One small correction: I've asked Nick several times over the years what credence he personally assigns to our living in an ancestor-simulation. He's never reported a figure higher than 20%. That's rather different from believing that if posthuman superintelligence runs ancestor-simulations, then the principle of mediocrity dictates we're probably one of them – which is Nick's view.
3:29 PM, March 20, 2017
margarita mc said...
Mmm. I've only recently started to read your (most enjoyable) blog as part of my self education in science, and so have never heard of the simulation hypothesis before today.

You wrapped up your piece with:

"Most readers will instead correctly conclude that today’s intelligencia is full of shit. And I can’t even blame them for it."

And I was relieved that you had written that, Sabine, because as I was getting to the end of the blog post the thought that was pounding in my head was, "Do people actually get PAID for thinking up this **##* stuff?!"

I'm far too well mannered to have written a comment saying what you yourself said - but it was marvellous to have it said for me!
5:49 PM, March 20, 2017
Uncle Al said...
@qsa, "interactions of 100 atoms to the experimental precision"

Orthorhombic sulfur: space group Fddd (#70); a = 10.4646 Å, b = 12.8660 Å, c = 24.4860 Å; α,β,γ = 90°, has 128 atoms in its unit cell (DOI: 10.1107/S0108270187088152). You might need more than 100 atoms given that sphere close packing is 74% occupancy and sulfur manages only 17%.
7:52 PM, March 20, 2017
Tom Aaron said...
I'm a geologist.

On all of these issues I use a geologic timescale. We are still motes on a dot. In a thousand years? A million? A billion?

We don't know what consciousness is. We don't yet have full AI. We still haven't had any interaction with some alien intelligence. The bottom line is that we are primitive creatures. Proposing some type of simulation is just another level that our brains can 'sort of' get around... the same with multiverses, parallel universes, etc. There is ZERO evidence for it, yet it's tossed out as a possibility. It's akin to Creationism: no less legitimate, yet no more absurd. Nothing in the physical properties of the Universe suggests a god or a simulated existence. It need not be refuted, as there is nothing to refute.

The next hundred years are going to offer some exponential advancements in technology and our understanding of existence. We may discover that 'reality' is more mind-blowing than the possibilities we toss out today, like 'simulation'. Today we are frustrated by our physics and impatient, but the legitimate answers will still be evidence-based.
7:54 PM, March 20, 2017
Sabine Hossenfelder said...
qsa,

I explained in my post that your argument 'we do not need to simulate the whole universe' is an unproved and non-trivial assertion, so is your belief that we can reproduce the presently accepted fundamental laws of nature (the standard model and general relativity) on computers. We can't. We can only approximate them. I don't know why you expect me to repeat this and I'm not interested in repeating it yet again.
3:39 AM, March 21, 2017
Scott said...
The notion that we have solved problems that can't be solved by algorithms seems suspect. What counts as a "solution?" As long as it can be described and verified in finite time, it can be found (after a very long time) by a simple trial and error algorithm. And if it can't be described and verified in finite time, then how can we claim to have solved it?

Proving a problem unsolvable isn't the same thing as solving it! The so-called halting problem is a great example. The name is misused; is the HP finding a general program for determining halting behavior, or is the HP determining whether such a program can be found? The latter is solvable and solved; the answer is no, which is why the former is not and cannot be solved. Claims that the HP is an unsolvable solved problem trade on confusion between these two definitions.

That being said, Penrose is a pretty brilliant mathematician, and I think this is evidence that these things are hard rather than that some people are stupid or malicious for believing false or confused statements.

This is actually why I take Bostrom seriously! I don't necessarily think his argument is right, but then I'm not sure HE does. The point of pushing an argument like this is to see where and how it fails. I'd argue that Sabine's criticism here only applies if the argument can't fail — if it can't, then it's "not even wrong" and so frivolous. But I think it can fail, and in interesting ways that can teach us something. The fact that it can't be rejected based on physical evidence doesn't mean it can't be rejected, nor that such a rejection would invalidate the exercise. This is how philosophy (and math, I should add!) makes progress. It's just as hard and just as important as physics, though I don't necessarily expect physicists to agree!

My favorite analogy is with go (or chess if you prefer). There are some moves that you don't ever see in a pro game because they end badly. But knowing that they end badly requires playing them out! That's what I think Bostrom is doing; or at least that's why I think what he's doing is valuable, even if it turns out he's a confused true believer.

I'd be happier if this post engaged with the actual probabilistic argument he makes. It's not trivial, it can be justified by certain assumptions, and those assumptions are accepted in some other contexts. The question is, why should we reject them in this context? I feel that Sabine has given only half an answer here, and has not done anything to link that answer, in a careful way, to Bostrom's actual claims. As someone who takes the call to quantify humanistic and philosophical arguments very seriously, I'm slightly bothered by this unwillingness to meet half-way.
3:41 AM, March 21, 2017
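Scott's two readings of "the halting problem" are worth pinning down. The question "does a general halting decider exist?" is indeed solved, and the answer is no, by diagonalization; the sketch below (Python, with a hypothetical halts() that cannot actually be implemented) states the argument:

```python
def halts(prog, arg) -> bool:
    """Hypothetical total decider: True iff prog(arg) halts.
    No such function can exist; this stub only states the assumption."""
    raise NotImplementedError

def g(x):
    # Diagonal program: if the decider says g(g) halts, loop forever;
    # if it says g(g) loops, halt immediately.
    if halts(g, g):
        while True:
            pass
    return 0

# If halts were real and total, halts(g, g) would be True exactly when
# g(g) does not halt -- a contradiction. So the former problem (building
# a general decider) is unsolvable, while the latter (deciding whether
# such a decider can exist) is solved: the answer is no.
```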
Sabine Hossenfelder said...
David,

I've read your comment 5 times, but failed to make sense of it, sorry. He believes we live in a simulation with 20% probability or he believes we probably live in one of them (assuming that 'probably' means 'with almost 100%')? Your comment seems to say both.
3:47 AM, March 21, 2017
Sabine Hossenfelder said...
Scott,

I think you misunderstood the message I tried to get across. It's not that I think it's uninteresting, or that nothing can be learned from it, or that one shouldn't try to estimate the probabilities. I'm saying that if you want to do that, you have to take into account the physical laws that we have confirmed to high precision. That's a non-trivial requirement, and without demonstrating that you can actually reproduce all our observations, the simulation hypothesis is merely fiction. Best,

B.
3:51 AM, March 21, 2017
Cairo Silver said...
What's wrong with treating the universe as a computer?

I mean, if you argued against it by saying 'So where is that computer? Inside another universe? Is that a computer? Computers all the way down?', that's a valid argument against it.

But otherwise if a universe works by fixed rules and does not deviate from them, what's wrong with calling it a computer? Is the problem that 'computer' seems too intentful a word?
5:12 AM, March 21, 2017
David Pearce said...
Sabine, apologies for the ambiguity. I just checked the Simulation Argument FAQ (last updated 2011). Nick explicitly states that he assigns a probability to the simulation hypothesis of something in the 20% region. This is quite consistent with believing that _if_ posthuman superintelligence runs full-blown ancestor simulations, then we probably inhabit one of them. Maybe such simulations will prove technically infeasible; maybe humans will shortly go extinct; maybe any posthuman superintelligence will find ancestor-simulations too unethical or uninteresting to run. Maybe (and this would be one of my doubts, not Nick's) phenomenally bound subjects of experience can't arise at different levels of computational abstraction. Either way, Nick is much more cautious than some of his popularisers. Compare Elon Musk's recent claim that the probability we live in basement reality is “one in billions”.
6:41 AM, March 21, 2017
Sabine Hossenfelder said...
David,

Ok, I see. My biggest issue with Nick isn't the number but that I think it would be wiser to not mix up the simulation hypothesis with existential risks that actually have a sound footing in science.
11:16 AM, March 21, 2017
Sabine Hossenfelder said...
Cairo,

Why don't you read what I explained in my post: It isn't compatible with what we know about the laws of nature. Unless you can prove that it can be made compatible, it's merely a mildly interesting tale.
11:17 AM, March 21, 2017
Tom Aaron said...
Cairo... is the Universe a computer?

Maybe yes, maybe no. But evidence is needed. We have none.

We don't know what existence is, so we don't know if it follows some rational process. Our senses perceive the fragments of reality that our minds are capable of detecting. Our math and physics are akin to watching a baseball game. Everything seems rational to the fans watching. Patterns are just accepted, and some theory might be developed about how the number of strikes relates to the number of outs, etc. However, when we stand back and ask someone who has never seen a baseball game if the patterns represent reality, he would go 'huh?... it has nothing to do with gathering coconuts'.

The computer model, simulation, etc. 'assume' a greater grasp on existence than what we have. It's akin to the ancients assuming that some god is responsible for the annual seasons. We are still clueless as to the extent of the knowledge that we don't know. There may be a simulated existence, there may be a Sun god, the Universe may be a computer, but 'so what?'. There is no evidence for any of these. Zero. We have finite knowledge and, like baseball, it may be irrelevant to understanding the bigger picture.
11:52 AM, March 21, 2017
George Rush said...
Sabine, obviously we might "live in a simulation". OTOH obviously you know what you're talking about. So, there's an apparent contradiction.

Minimally, a virtual reality simulation has to reproduce my experiences - or, from your point of view, yours. Most people say their consciousness can't be simulated, but not you. So, surely you agree that you personally could be a "brain in a vat"? Unlikely, but doesn't violate any physical laws, right? Moving beyond solipsism, if it can do one human, it can do 7 billion. A few orders of magnitude extra sim power is no big deal. Now, let's consider some of your objections.

From your previous post on the subject, "To avoid the inconsistencies, you’ll have to carry on all results for all future measurements that humans could possibly make, the problem being you don’t know which measurements they will make because you haven’t yet done the simulation." I guess you're talking about (to take a simple example) a classic spacelike-separated Alice & Bob EPRB gedanken? The sim can easily provide random spin up/down results for A and B. But later, when the results are compared, they must be consistent, with the right correlations. If that's what you mean, I claim it's no problem, happy to explain why.

Not sure why you think sim has to look at "all future measurements" Alice and Bob might make. But the objection is easily disposed of. As far as we know there may be only one future path. The entire history of the universe may be pre-known: no calculation necessary, sim just looks it up in a large database.

Then you say: "But there is a better way to test whether we live in a simulation: Build simulations ourselves, ... Eventually, the ... lowest level will find some strange artifacts. Something that is clearly not compatible with the laws of nature they have found so far and believed to be correct." This is no argument against sim hypothesis. Leaving aside some (rather important) issues, there's no reason this scenario couldn't happen. It may seem "crazy", but that's no argument.

In this current post, you say "... there’s a trivial way in which the simulation hypothesis is correct: You could just interpret the presently accepted theories to mean that our universe computes the laws of nature. Then it’s tautologically true that we live in a computer simulation. It’s also a meaningless statement." Wrong, it's far from meaningless. If we really are brains in vats, fed by thick conduits carrying encoded qualia, generated by a physical computer of some sort, tended by insectoid alien programmers - that's non-trivial! If the computer happens to generate those inputs by solving Standard Model equations, so what?

You assume sim must use the most advanced technology we ignorant naked apes can (sort of) understand at this stage of our development: digital quantum computers. But we have no idea what powerful computing resources might be available to us 100 years from now, much less a million. We can't constrain hypothetical alien programmers to our primitive techniques.

Although you're aware that only human experience needs to be simulated, you seem to think in order to do that, simulation of the entire universe, at all scales, must ultimately be involved. As you show, with primitive digital quantum computers, that probably can't be done. But your assumptions are flawed.

The solution to the "apparent contradiction" mentioned above may be: you're really not debating whether sim is possible, logical, or reasonable. Instead, you're investigating whether we can prove it or not: whether it's a scientific hypothesis, in the Popperian sense. But before going on I'll wait for response (if, indeed, you consider it worthwhile). BTW I don't "believe in" sim: although there's absolutely no evidence against it, there's not much for it either. Thanks for your time.
3:04 PM, March 21, 2017
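George's claim that deferred EPR consistency is "no problem" can at least be made concrete, leaving untouched the scaling objection Sabine raises in her reply: a simulator could generate outcomes only at comparison time, sampling from the quantum joint distribution. A toy sketch for singlet-pair statistics only (Python; the "lazy" framing is an illustration, not anyone's worked-out proposal):

```python
import math, random

# Toy "lazy" EPR bookkeeping: outcomes for a singlet pair are generated
# only when the records are compared, directly from the quantum joint
# distribution P(opposite results) = cos^2((a - b)/2) for angles a, b.
def compare(angle_a: float, angle_b: float) -> tuple[int, int]:
    p_opposite = math.cos((angle_a - angle_b) / 2) ** 2
    a = random.choice([+1, -1])                      # Alice's record
    b = -a if random.random() < p_opposite else a    # Bob's, correlated now
    return a, b

# Reproduces the singlet correlation E(a,b) = -cos(a - b):
n = 200_000
theta = math.pi / 3
e = sum(a * b for a, b in (compare(0.0, theta) for _ in range(n))) / n
print(round(e, 2), round(-math.cos(theta), 2))       # both near -0.5
```

The catch, as the reply below stresses, is doing this consistently for every possible measurement in an interacting many-body universe, not for one isolated pair.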
Sabine Hossenfelder said...
George,

"Although you're aware that only human experience needs to be simulated, you seem to think in order to do that, simulation of the entire universe, at all scales, must ultimately be involved. "

What I am saying is, if you believe you can get away with not simulating parts of the universe when some of your simulated consciousnesses - whose actions you can't predict - aren't trying to probe them, I want to see how you do this. Unless you demonstrate that, it's just bla-bla but nothing any scientist can take seriously. The idea that you can somehow produce an explanation for all our observations that does not require the laws of nature to be how we have extracted them today is an extraordinary claim and nobody should accept it without a very solid backing. Best,

B.
3:10 AM, March 22, 2017
Topi Rinkinen said...
Hi Bee,

Let's assume the universe could be described with a set of equations and an initial condition. Then there could be (outside of our universe) a number of computers which run the simulation, and all of them would get the same result (assuming the equations are deterministic).

If the number of computers running this simulation is more than 1, could someone inside the simulation have means to check how many computers are running the simulation? I guess not.

If a number of those computers are running the simulation in synchronization (so that you and I are writing exactly the same letters at the same their-time), and then one by one those computers are shut down, would we notice any difference? Even when the last computer is shut down? I believe not.

What if the equations are such that the simulation can be done in a kind of frequency domain (as opposed to the time domain)? What would be the point in time (in the simulating computer's world-time) at which I press this T-letter? It's an ill-defined question.

If the frequency-domain simulation were such that it could be distributed to several computers, which could run in parallel or at totally different times, how would that affect our capability of detecting the simulation? I'd say we couldn't sense it.

What if someone invented the equations and started the simulation in the frequency domain, but aborted it after running a small part? Would we exist, and could we feel that the simulation is incomplete? I say no, even if the small part is like one millionth of the whole simulation.

And the logical next question is, is there a need to start the simulation for us to feel our own existence? Or is there a need to "invent" the equations in the first place, for our everyday feelings?

...
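Topi's synchronized-copies scenario has a mundane analogue: a deterministic program with a fixed seed produces bit-identical histories on every machine that runs it, and nothing computed inside the run depends on how many copies exist. A trivial sketch (Python; the "world" here is of course a stand-in):

```python
import random

def run_world(seed: int, steps: int) -> list[float]:
    # Deterministic "universe": same equations + same initial condition
    # give the same history on every computer that runs them.
    rng = random.Random(seed)
    state = rng.random()                       # initial condition from the seed
    history = []
    for _ in range(steps):
        state = 3.9 * state * (1.0 - state)    # logistic map as stand-in dynamics
        history.append(state)
    return history

# Replica runs are bit-identical, so the inside view cannot reveal
# whether one, two, or (after shutdown) zero copies are executing.
assert run_world(42, 1000) == run_world(42, 1000)
```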

Edited by Mod vege on 13.04.17 02:37.



