Every year, Edge.org publishes the responses of about 100-150 scientists, philosophers, and public intellectuals (and, alas, some cranks) to a single question. It’s generally an interesting read; previous posts covered the 2011 question (What is the most important invention in the past 2000 years?) here and the 2012 question (What is Your Favorite Deep, Elegant or Beautiful Explanation?) here.
The 2013 question is What SHOULD we be worried about?, which is a slightly different format. A summary is here. This year’s entries were also discussed on the SGU podcast here; the team was pretty disappointed and raised a few good points about the format. Most importantly, a platform where everyone gives a different answer biases people toward clever-sounding or interesting-sounding answers rather than genuine ones. And if that was a bid to make your answer stand out, it was easy to fail — at least 6 people gave the meta-answer of “we worry too much”.
Also, a lot of the worries seemed silly, and I was amazed at how many of them were the standard get-off-my-lawn WHAT IS SOCIAL MEDIA DOING TO US???? Here’s a list of worries that I thought were particularly Luddite and reactionary — and this stuff is coming from some very intelligent people:
- Internet drivel and the cultural loss of writing personal letters — apparently this has been a worry for centuries.
- Someone’s worried that Google now uses more semantic elements in search results instead of looking at just the text (as if it ever looked at just the text?), which means that someone else is the arbiter of what information is relevant/quality/true. This is the one I come closest to agreeing with — we should always be concerned about gatekeepers. But acting like we didn’t have gatekeepers before and it’s only happening now — because Internet! — isn’t helping.
- Technology has changed our perception of time and patience
- The coming underpopulation bomb — I don’t think the projections are certain enough for us to worry about it more than about how to feed the currently growing population.
- Immortality — to reply I’ll quote this post: “At another point in the discussion, a man spoke of some benefit X of death, I don’t recall exactly what. And I said: ‘You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing. But if you took someone who wasn’t being hit on the head with a baseball bat, and you asked them if they wanted it, they would say no. I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no.’”
- Technology is endangering democracy
- We’re becoming too connected
- Augmented reality
- Social Media: the more together, the more alone — and it’s warping the brains of OUR CHILDREN!
- The plot of Idiocracy will come true — this crap was touted by a psychology professor
- The loss of lust — we’ll become extinct by watching TOO MUCH ONLINE PRON!!1!
- The end of hardship inoculation (iPhones are too convenient)
- Augmented reality and not being able to tell it from reality
- The consequences of electronics
There were 155 responses, so these 14 represent about 9%. The ones that I thought were spot on fell into two related categories. The first was existential risks — stuff that’s actually worth worrying about, since it could lead to, if not extinction, then suffering and death on a mass scale: scarcity of water, food for a growing population, environmental collapse, energy crises, global warming. The second: a large number of responses (12/155) were worries about the general state of science (including the peer review process and political interests) as well as the public understanding of science, or lack thereof (including pseudoscience and science in the media). The two are of course directly related. Given the amount of misinformation and crap about global warming, especially in the last five years, we can see a direct link between a flawed scientific process, poor public understanding of science, and being unable and unwilling to take steps to prevent disaster.
This is a huge fucking worry and if you’re not extremely worried about both of these, you should read more and get more worried. And take action.
The answer I would have included is from Isaac Asimov. He may have been a sexually harassing asshole, but he did have a point about the existential risk of an interconnected society. This is from his novel The Caves of Steel. Take it away, Asimov:
“A City like New York must spend every ounce of effort getting water in and waste out. The nuclear power plants are kept going by uranium supplies that are constantly more difficult to obtain even from the other planets of the system, and the supply needed goes up steadily. The life of the City depends every moment on the arrival of wood pulp for the yeast vats and minerals for the hydroponic plants. Air must be circulated unceasingly. The balance is a very delicate one in a hundred directions, and growing more delicate each year. What would happen to New York if the tremendous flow of input and outgo were to be interrupted for even a single hour?”
“It never has been.”
“Which is no security for the future. In primitive times, individual population centers were virtually self-supporting, living on the produce of neighboring farms. Nothing but immediate disaster, a flood or a pestilence or crop failure, could harm them. As the centers grew and technology improved, localized disasters could be overcome by drawing on help from distant centers, but at the cost of making ever larger areas interdependent. In Medieval times, the open cities, even the largest, could subsist on food stores and on emergency supplies of all sorts for a week at least. When New York first became a City, it could have lived on itself for a day. Now it cannot do so for an hour. A disaster that would have been uncomfortable ten thousand years ago, merely serious a thousand years ago, and acute a hundred years ago would now be surely fatal.”