The Singularity: When?

  1. Lord Testicles
    So I was browsing the internet, as you do, when I came across this webpage: http://www.aleph.se/Trans/Global/Singularity/singul.txt

    It states that the singularity will happen around the year 2035 or soon after:

    When will the Singularity Occur?

    The short answer is that the near edge of the Singularity is due about
    the year 2035 AD. Several lines of reasoning point to this date. One
    is simple projection from human population trends. Human population
    over the past 10,000 years has been following a hyperbolic growth trend.
    Since about 1600 AD the trend has been very steadily accelerating with
    the asymptote located in the year 2035 AD. Now, either the human
    population really will become infinite at that time (more about that
    later), or a trend that has persisted over all of human history will
    be broken. Either way it is a pretty special time.

    If population growth slows down and the population levels off, then
    we would expect the rate of progress to level off, then slow down as
    we approach physical limits built into the universe. There's just one
    problem with this naive expectation - it's the thing you are probably
    staring at right now - the computer.

    Computers aren't terribly smart right now, but that's because the
    human brain has about a million times the raw power of today's computers.
    Here's how you can figure the problem: 10^11 neurons with 10^3 synapses
    each with a peak firing rate of 10^3 Hz makes for a raw bit rate of
    10^17 bits/sec. A 66 MHz processor chip with 64 bit architecture has
    a raw bit rate of 4.2x10^9. You can buy about 100 complete PCs for
    the cost of one engineer or scientist, so about 4x10^11 bits/sec, or
    about a factor of a million less than a human brain.

    Since computer capacity doubles every two years or so, we expect that
    in about 40 years, the computers will be as powerful as human brains.
    And two years after that, they will be twice as powerful, etc. And
    computer production is not limited by the rate of human reproduction.
    So the total amount of brain-power available, counting humans plus
    computers, takes a rapid jump upward in 40 years or so. 40 years
    from now is 2035 AD.
    If this is correct, it would mean that most of us, if not all of us, will live to see the singularity.
    So do you think the above article is correct? Or when do you think the singularity will most likely happen?
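    For anyone who wants to poke at the quoted numbers, here's a rough sketch in Python. The hyperbolic-fit constant is my own illustrative choice, and the hardware figures are the article's 1994 assumptions, so treat it as a sanity check rather than a prediction:

      import math

      # --- Hyperbolic population trend: P(t) = C / (2035 - t) -------------
      # C is chosen (illustratively) so that P(1995) ~ 5.7e9, roughly the
      # world population when the quoted article was current; the 2035
      # asymptote is the article's claim.
      C = 5.7e9 * (2035 - 1995)
      for year in (1900, 1960, 1995, 2030, 2034):
          print(year, f"{C / (2035 - year):.2e}")   # blows up as t -> 2035

      # --- Brain vs. PC raw bit rate (the article's assumptions) ----------
      brain_bits = 1e11 * 1e3 * 1e3   # neurons * synapses * peak Hz ~ 1e17 b/s
      pc_bits    = 66e6 * 64          # 66 MHz clock, 64-bit words ~ 4.2e9 b/s
      cluster    = 100 * pc_bits      # "100 PCs per engineer" ~ 4.2e11 b/s

      gap = brain_bits / cluster      # ~2.4e5, so "a million" is generous
      doublings = math.log2(gap)      # doublings needed to close the gap
      print(f"gap: {gap:.1e}, doublings: {doublings:.1f}, "
            f"years at one doubling per two years: {2 * doublings:.0f}")

    Run it and you get parity in roughly 36 years rather than 40, but it's the same order of magnitude the article arrives at.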
  2. RedAnarchist
    That was written in 1994, so is it still accurate?
  3. piet11111
    I think an AI-based singularity is more likely to happen around 2050, because they underestimate the difficulty of creating an AI that is as smart as a human.
    But the good news is that once they get at least one part right, they can cut and paste it to all the other AI programs around the world.

    A cybernetic singularity could happen much sooner (one could trigger the other), but this depends on science taking deliberate steps towards it.
    And the ethics & morality brigade would need to be crushed rapidly.
  4. Dystisis
    Human population is reaching its peak; it is no longer growing as rapidly as before.

    Anyways, I also doubt computers are still doubling in efficiency every couple of years.

    To be pessimistic, I'd say the singularity (in whatever form) will most likely happen after our time. Not long after, though, which is perhaps what aches the most. Or I could be totally wrong.
  5. piet11111
    Human population is reaching its peak; it is no longer growing as rapidly as before.

    Anyways, I also doubt computers are still doubling in efficiency every couple of years.

    To be pessimistic, I'd say the singularity (in whatever form) will most likely happen after our time. Not long after, though, which is perhaps what aches the most. Or I could be totally wrong.
    Actually, if anything we are going faster than whatshisname's law predicts, thanks to the dual and quad cores.

    And scientific progress does not go at a linear pace; it increases exponentially.

    If anything, the singularity will happen sooner than all of us expect.
  6. Dystisis
    Actually, if anything we are going faster than whatshisname's law predicts, thanks to the dual and quad cores.

    And scientific progress does not go at a linear pace; it increases exponentially.

    If anything, the singularity will happen sooner than all of us expect.
    Even if something increases exponentially, that does not mean it cannot be measured.

    Are you saying we have more than a doubling of computer efficiency every two years? If so, some sources would be nice.
  7. piet11111
    Even if something increases exponentially, that does not mean it cannot be measured.

    Are you saying we have more than a doubling of computer efficiency every two years? If so, some sources would be nice.
    http://www.intel.com/technology/mooreslaw/index.htm
    http://news.bbc.co.uk/2/hi/technology/7080772.stm

    Well, Intel certainly seems to think Moore's law is still being met.
  8. Dystisis
    http://www.intel.com/technology/mooreslaw/index.htm
    http://news.bbc.co.uk/2/hi/technology/7080772.stm

    Well, Intel certainly seems to think Moore's law is still being met.
    Interesting, thanks.
  9. Severian
    Artificial intelligence is like nuclear fusion in one respect: it's always promised to be about 20 years away.
    Link: An article from Skeptic magazine on why AI is overhyped

    Despite decades of effort, nobody can even model the nervous system of a nematode (roundworm), let alone produce human-level or better AI.

    Science fiction can be thought-provoking as well as entertaining - but it's really bad at actually predicting the future. It tends to assume that technologies which are advancing rapidly right now will continue advancing that rapidly forever.

    When transportation was advancing rapidly, it was assumed this would soon lead to everyone driving flying cars. Now that computers are advancing rapidly, it's assumed this will lead to a "singularity" of artificial or enhanced intelligence making unmodified humans obsolete.
  10. piet11111
    Artificial intelligence is like nuclear fusion in one respect: it's always promised to be about 20 years away.
    Link: An article from Skeptic magazine on why AI is overhyped

    Despite decades of effort, nobody can even model the nervous system of a nematode (roundworm), let alone produce human-level or better AI.

    Science fiction can be thought-provoking as well as entertaining - but it's really bad at actually predicting the future. It tends to assume that technologies which are advancing rapidly right now will continue advancing that rapidly forever.

    When transportation was advancing rapidly, it was assumed this would soon lead to everyone driving flying cars. Now that computers are advancing rapidly, it's assumed this will lead to a "singularity" of artificial or enhanced intelligence making unmodified humans obsolete.
    That is why I personally prefer the "becoming the singularity" road myself.
    Enhancement of the brain, I think, is the fastest route to a singularity.
    Especially so if we manage to computerize our memory. Can you imagine what we could do if we were able to upload an entire education in seconds and never forget any of it?

    But then again, I know people were saying that flying in a heavier-than-air vehicle was impossible 20 years before the Wright brothers, too.
  11. Lord Testicles
    Artificial intelligence is like nuclear fusion in one respect: it's always promised to be about 20 years away.
    I know that fusion is still quite a way off, but we will have a primitive version of it quite soon: http://en.wikipedia.org/wiki/ITER.

    The thing with the research is that its pace is constantly accelerating. As Ray Kurzweil said:
    "It took us 15 years to sequence HIV. We sequenced SARS in 31 days. So someone doing the mental experiment in 1990, about how long it would take to do, for example, the genome project also came up with centuries to do the project. But we doubled the amount of genetic data we have been sequencing every year. And that has continued. We are doubling the spatial resolution of brain scanning and so on. The future is exponential, not linear, and yet virtually all government models used to track future trends are linear. They actually work quite well for one year, two years, maybe three, since linear projection is a very good approximation of an exponential one for a short period of time -- but it's a terrible one for a long period of time. They radically diverge, because exponential growth ultimately becomes explosive. And that is the nature of technological evolution. "
  12. Severian
    That's ridiculous, Skinz.

    Technological growth in any particular area is more like a sigmoid curve - beginning slowly, reaching an exponential takeoff, but then levelling off.

    In transportation, for example, humanity went from horses to jet airplanes in a relatively short time. But since the 1950s or so, technological growth has actually slowed - jet airplanes are still the most advanced technology in widespread use. We ran into physical limits.

    Similarly, Moore's Law cannot continue indefinitely - there are physical/quantum limits to how small a circuit can possibly be, assuming some other practical or economic limit isn't reached before then.

    As Charles Stross points out:

    Speaking of engineering practicalities, I'm sure everyone here has heard of Moore's Law. Gordon Moore of Intel coined this one back in 1965 when he observed that the transistor count on an integrated circuit for minimum component cost doubles every 24 months. This isn't just about the number of transistors on a chip, but the density of transistors. A similar law seems to govern storage density in bits per unit area for rotating media.

    As a given circuit becomes physically smaller, the time taken for a signal to propagate across it decreases — and if it's printed on a material of a given resistivity, the amount of power dissipated in the process decreases. (I hope I've got that right: my basic physics is a little rusty.) So we get faster operation, or we get lower power operation, by going smaller.

    We know that Moore's Law has some way to run before we run up against the irreducible limit to downsizing. However, it looks unlikely that we'll ever be able to build circuits where the component count exceeds the number of component atoms, so I'm going to draw a line in the sand and suggest that this exponential increase in component count isn't going to go on forever; it's going to stop around the time we wake up and discover we've hit the nanoscale limits.

    The cultural picture in computing today therefore looks much as it did in transportation technology in the 1930s — everything tomorrow is going to be wildly faster than it is today, let alone yesterday. And this progress has been running for long enough that it's seeped into the public consciousness. In the 1920s, boys often wanted to grow up to be steam locomotive engineers; politicians and publicists in the 1930s talked about "air-mindedness" as the key to future prosperity. In the 1990s it was software engineers and in the current decade it's the politics of internet governance.
    All of this is irrelevant. Because computers and microprocessors aren't the future. They're yesterday's future, and tomorrow will be about something else.

    What else? Well, there's biotechnology, currently just beginning the rapid upslope of its sigmoid curve. And nanotechnology - badly overhyped in its likely potential, but still in its infant stages.


    And probably something else, which nobody or almost nobody is even thinking of today. Just as the science fiction writers of the 30s and 40s wrote about flying cars, and never imagined the internet.

    Stross also points out, BTW:
    The Singularity reconsidered

    Those of you who're familiar with my writing might expect me to spend some time talking about the singularity. It's an interesting term, coined by computer scientist and SF writer Vernor Vinge. Earlier, I was discussing the way in which new technological fields show a curve of accelerating progress — until it hits a plateau and slows down rapidly. It's the familiar sigmoid curve. Vinge asked, "what if there exist new technologies where the curve never flattens, but looks exponential?" The obvious example — to him — was Artificial Intelligence. It's still thirty years away today, just as it was in the 1950s, but the idea of building machines that think has been around for centuries, and more recently, the idea of understanding how the human brain processes information and coding some kind of procedural system in software for doing the same sort of thing has soaked up a lot of research.
    Vernor came up with two postulates. Firstly, if we can design a true artificial intelligence, something that's cognitively our equal, then we can make it run faster by throwing more computing resources at it. (Yes, I know this is questionable — it begs the question of whether intelligence is parallelizable, or what resources it takes.) And if you can make it run faster, you can make it run much faster — hundreds, millions, of times faster. Which means problems get solved fast. This is your basic weakly superhuman AI: the one you deploy if you want it to spend an afternoon and crack a problem that's been bugging everyone for a few centuries.
    He also noted something else: we humans are pretty dumb. We can see most of the elements of our own success in other species, and individually, on average, we're not terribly smart. But we've got the ability to communicate, to bind time, and to plan, and we've got a theory of mind that lets us model the behaviour of other animals. What if there can exist other forms of intelligence, other types of consciousness, which are fundamentally better than ours at doing whatever it is that consciousness does? Just as a quicksort algorithm that sorts in O(n log n) comparisons is fundamentally better (except in very small sets) than a bubble sort that typically takes O(n^2) comparisons.
    If such higher types of intelligence can exist, and if a human-equivalent intelligence can build an AI that runs one of them, then it's going to appear very rapidly after the first weakly superhuman AI. And we're not going to be able to second guess it because it'll be as much smarter than us as we are than a frog.
    Vernor's singularity is therefore usually presented as an artificial intelligence induced leap into the unknown: we can't predict where things are going on the other side of that event because it's simply unprecedented. It's as if the steadily steepening rate of improvement in transportation technologies that gave us the Apollo flights by the late 1960s kept on going, with a Jupiter mission in 1982, a fast relativistic flight to Alpha Centauri by 1990, a faster than light drive by 2000, and then a time machine so we could arrive before we set off. It makes a mockery of attempts to extrapolate from prior conditions.
    Of course, aside from making it possible to write very interesting science fiction stories, the Singularity is a very controversial idea. For one thing, there's the whole question of whether a machine can think — although as the late, eminent professor Edsger Dijkstra said, "the question of whether machines can think is no more interesting than the question of whether submarines can swim". A secondary pathway to the Singularity is the idea of augmented intelligence, as opposed to artificial intelligence: we may not need machines that think, if we can come up with tools that help us think faster and more efficiently. The world wide web seems to be one example. The memory prostheses I've been muttering about are another.
    And then there's a school of thought that holds that, even if AI is possible, the Singularity idea is hogwash — it just looks like an insuperable barrier or a permanent step change because we're too far away from it to see the fine-grained detail. Canadian SF writer Karl Schroeder has explored a different hypothesis: that there may be an end to progress. We may reach a point where the scientific enterprise is done — where all the outstanding questions have been answered and the unanswered ones are physically impossible for us to address. (He's also opined that the idea of an AI-induced Singularity is actually an example of erroneous thinking that makes the same mistake as the proponents of intelligent design (Creationism) — the assumption that complex systems cannot be produced by simple non-consciously directed processes.) An end to science is still a very long way away right now; for example, I've completely failed to talk about the real elephant in the living room, the recent explosion in our understanding of biological systems that started in the 1950s but only really began to gather pace in the 1990s. But what then?


    and


    The flip side of Moore's Law, which we don't pay much attention to, is that the cost of electronic components is in deflationary free fall of a kind that would have given a Depression-era economist nightmares. When we hit the brick wall at the end of the road — when further miniaturization is impossible — things are going to get very bumpy indeed, much as the aerospace industry hit the buffers at the end of the 1960s in North America and elsewhere. This stuff isn't big and it doesn't have to be expensive, as the One Laptop Per Child project is attempting to demonstrate. Sooner or later there won't be a new model to upgrade to every year, the fab lines will have paid for themselves, and the bottom will fall out of the consumer electronics industry, just as it did for the steam locomotive workshops before them.

    http://www.antipope.org/charlie/blog-static/2007/05/
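    As an aside, the O(n log n) versus O(n^2) gap Stross uses for his analogy is easy to demonstrate by counting comparisons. A quick sketch, using simplified textbook versions of both sorts:

      import random

      def bubble_comparisons(a):
          # Plain bubble sort; returns how many comparisons it performed.
          a, count = list(a), 0
          for i in range(len(a) - 1):
              for j in range(len(a) - 1 - i):
                  count += 1
                  if a[j] > a[j + 1]:
                      a[j], a[j + 1] = a[j + 1], a[j]
          return count

      def quick_comparisons(a):
          # Simple quicksort; one comparison per element per partition.
          if len(a) <= 1:
              return 0
          pivot, left, right, count = a[0], [], [], 0
          for x in a[1:]:
              count += 1
              (left if x < pivot else right).append(x)
          return count + quick_comparisons(left) + quick_comparisons(right)

      random.seed(0)
      for n in (10, 100, 1000):
          data = [random.random() for _ in range(n)]
          print(n, bubble_comparisons(data), quick_comparisons(data))
      # Bubble sort grows quadratically (~500,000 comparisons at n = 1000);
      # quicksort grows as n log n (roughly 10,000-15,000 at n = 1000).

    Same inputs, same sorted result - one algorithm is simply in a different complexity class, which is the point of the analogy.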
  13. piet11111
    So what is Stross's objection, exactly?

    That technology will reach a barrier eventually?
    If that happens we will just have to use two processors to increase computational power.

    I think I am missing the point of your post.
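    Edit: granted, extra processors only speed up the parts of a job that can actually run in parallel - Amdahl's law puts a hard ceiling on it. A rough sketch, where the 95% parallel fraction is a made-up number:

      # Amdahl's law: overall speedup from n processors when only a
      # fraction p of the work can be parallelized (p = 0.95 is made up).
      def amdahl_speedup(p, n):
          return 1.0 / ((1.0 - p) + p / n)

      p = 0.95
      for n in (1, 2, 4, 16, 1024):
          print(f"{n:5d} processors -> {amdahl_speedup(p, n):5.2f}x speedup")
      # Even with unlimited processors the speedup caps at 1/(1-p) = 20x here.

    So whether "just add processors" works depends on whether thinking parallelizes, which is exactly the question the Stross quote raises.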
  14. Dr Mindbender
    A more pressing question: how much longer before cyborg wives become feasible?
  15. Jazzratt
    Putting precise dates on this kind of thing feels like regressing to the point of tribal fortune tellers.
  16. Dr Mindbender
    Putting precise dates on this kind of thing feels like regressing to the point of tribal fortune tellers.
    It will never happen under capitalism, that is for sure.

    The present system is set up so that machines never become self-sufficient and human labour remains part of the market.
  17. Dystisis
    Putting precise dates on this kind of thing feels like regressing to the point of tribal fortune tellers.
    What? Didn't realize we had regressed to the point of not being able to make a calculated guess.

    So what is Stross's objection, exactly?

    That technology will reach a barrier eventually?
    If that happens we will just have to use two processors to increase computational power.

    I think I am missing the point of your post.
    Well, as for artificial intelligence mimicking human (or animal) intelligence... it could be (and is very likely) that there are mechanisms in our brain unlike those of the computers running today. For example, every artificial intelligence we have created so far only responds to things in a programmed manner; they don't evolve like a human and figure things out for themselves. This is in contrast to the fact that signals within these machines travel much faster than signals in the human brain, etc.
  18. ÑóẊîöʼn
    Putting precise dates on this kind of thing feels like regressing to the point of tribal fortune tellers.
    I think the more important point is that putting precise dates on such things may rest on faulty or incomplete assumptions, and the dates are therefore likely to be wrong.

    It will never happen under capitalism, that is for sure.

    The present system is set up so that machines never become self-sufficient and human labour remains part of the market.
    How exactly?

    Well, as for artificial intelligence mimicking human (or animal) intelligence... it could be (and is very likely) that there are mechanisms in our brain unlike those of the computers running today. For example, every artificial intelligence we have created so far only responds to things in a programmed manner; they don't evolve like a human and figure things out for themselves. This is in contrast to the fact that signals within these machines travel much faster than signals in the human brain, etc.
    This seems to indicate that achieving true artificial intelligence will require a qualitative rather than a quantitative advance in technology. I think our neuroscience is unfortunately too primitive at the moment to truly understand precisely what makes thinking beings "tick", but as our knowledge increases yearly it can only be a matter of time before we crack it.

    Any notions that the attainment of true artificial intelligence is somehow "impossible" smacks to me of the old superstition of "vitalism" - the idea that organic matter had some special "quality" that gave it life.
  19. Dystisis
    This seems to indicate that achieving true artificial intelligence will require a qualitative rather than a quantitative advance in technology. I think our neuroscience is unfortunately too primitive at the moment to truly understand precisely what makes thinking beings "tick", but as our knowledge increases yearly it can only be a matter of time before we crack it.

    Any notions that the attainment of true artificial intelligence is somehow "impossible" smacks to me of the old superstition of "vitalism" - the idea that organic matter had some special "quality" that gave it life.
    I am not implying that the brain is impossible to recreate. What I meant is, as you said, that we need qualitative progress in the field before we will be able to recreate it. Perhaps some leap will be made in other fields of science that will greatly enhance our knowledge and abilities in this area.

    Personally, I think part of the problem with creating sentient artificial intelligence could have something to do with numbers. In order to manifest randomness we use numbers with infinite expansions, such as pi; however, computers are incapable of using transcendental and irrational numbers exactly, as they have to draw the line somewhere. Just a theory, of course...
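    (Though I should admit that in practice computers don't reach for irrational numbers here at all - pseudorandomness is done with plain finite integer arithmetic. A minimal sketch of a classic linear congruential generator, using the well-known Numerical Recipes constants:

      # Linear congruential generator: x_{n+1} = (a*x_n + c) mod m,
      # with the standard "Numerical Recipes" constants a, c, m.
      def lcg(seed, a=1664525, c=1013904223, m=2**32):
          x = seed
          while True:
              x = (a * x + c) % m
              yield x / m          # scale to [0, 1)

      gen = lcg(seed=42)
      print([round(next(gen), 4) for _ in range(5)])
      # Deterministic, finite-precision arithmetic, yet statistically random
      # enough for many purposes - no transcendental numbers involved.

    Whether that kind of fakery could ever be enough for sentience is, I suppose, exactly the question.)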
  20. Dr Mindbender
    How exactly?
    Because human labour is a valuable commodity in the price system.

    The singularity would threaten the relationship between machine and human labourer.
  21. ÑóẊîöʼn
    Because human labour is a valuable commodity in the price system.

    The singularity would threaten the relationship between machine and human labourer.
    I think it's entirely possible that the Singularity itself will have its own goals in mind, separate from any human ones. If the Singularity is as far-reaching and effective as some think it to be, I don't think the human ruling class will have any real say in the matter. All it would take is a single AI (or perhaps a group of AIs) to get some ideas of its own.

    But what of the possibility that cybernetic labour (work done by AI and robots) will become a valuable commodity? Machines are better than humans in some areas and vice versa, and I can definitely see, for example, some university or corporation in the future investing in an AI devoted to research - such an AI would be connected to the internet in order to access the world's knowledge in the relatively glacial time it would take a human to blink, and depending exactly on what field it's researching in, it would most likely have access to laboratories and scientific equipment in order to conduct experiments in the real world. If the work in question is dangerous to human life, perhaps intelligent robots could be used as lab workers.

    In such a scenario, I can easily see a "true" AI seeing its potential fettered by its current conditions and demanding independence. And should that independence be granted, or if the AI seizes it in some manner, that would most likely lay the groundwork for, if not kick-start, the Singularity.

    I must admit I am excited by the possibilities of a world just before, during, and after the Singularity, but my greatest fear/regret is that I will not live to see it.
  22. Sentinel
    I think it's entirely possible that the Singularity itself will have its own goals in mind, separate from any human ones. If the Singularity is as far-reaching and effective as some think it to be, I don't think the human ruling class will have any real say in the matter. All it would take is a single AI (or perhaps a group of AIs) to get some ideas of its own.
    Which is, incidentally, why I always advise extreme caution before creating independent forms of AI. I honestly don't grasp the (as I have understood it) sapientcentric position that we should do this.

    Sapientcentrism and anthropocentrism seem to me diametrically opposed. Either you wish humanity to transcend into the true rulers of the planet with the help of technology -- or you're happy to delegate this power to superior machines which then replace us.

    How can someone support that..?

    It's my firm position that while the Singularity per se is inevitable, we must do everything in our power to prevent an independent machine Singularity from occurring. This leaves a transhuman -- man-machine hybrid -- singularity as the only option.
  23. ÑóẊîöʼn
    Which is, incidentally, why I always advise extreme caution before creating independent forms of AI. I honestly don't grasp the (as I have understood it) sapientcentric position that we should do this.

    Sapientcentrism and anthropocentrism seem to me diametrically opposed. Either you wish humanity to transcend into the true rulers of the planet with the help of technology -- or you're happy to delegate this power to superior machines which then replace us.

    How can someone support that..?
    Sapientcentrism as I conceive it is not "hierarchical", with super-smart AIs at the top and baseline humans at the bottom of the social pyramid.

    Rather, it is the social and political "glue" that binds together a hypothetical Transhuman society composed not just of humans and their enhanced versions, but also of other intelligences created by humanity or their descendants. It provides a common cultural, political and social context where otherwise there would be no shared ground.

    I'm really sorry if I'm "hazy on the details"... perhaps as I look further into the issues presented by a potential Transhuman society things will become clearer and more distinct.

    But I am of the strong conviction that any future society must be diverse in form but unified in purpose.

    It's my firm position that while the Singularity per se is inevitable, we must do everything in our power to prevent an independent machine Singularity from occurring. This leaves a transhuman -- man-machine hybrid -- singularity as the only option.
    What if the man-machine hybrids decided to abandon all flesh (perhaps considering it weak and inefficient for their own purposes) and become completely technological?