View Full Version : Singularity: yay or nay?
DasFapital
6th June 2012, 17:05
Views of Ray Kurzweil? Personally I think he's a bit of a douchey bullshitter and charlatan.
Deicide
6th June 2012, 17:14
I personally wouldn't mind immortality and super-intelligence. It's major speculation though. I have his book "The Singularity Is Near" but I haven't even read it.
Ocean Seal
6th June 2012, 17:20
In my opinion we aren't anywhere close to the singularity and we are going to have a lot of obstacles which are honestly going to impede a simple application of Moore's law.
Sperm-Doll Setsuna
6th June 2012, 17:24
Singularity is kind of quack metaphysical mumbo-jumbo in this context. Nature and mortality ought to be conquered, however. I wish I was born in a time when it was already so. Bloody chance. :thumbdown:
Tenka
6th June 2012, 17:56
I think the concept of the technological singularity is essentially meaningless, and, if the following quote from Wikipedia has any truth to it:
Many Singularitarians consider nanotechnology to be one of the greatest dangers facing humanity. For this reason, they often believe that seed AI (an AI capable of making itself smarter) should precede nanotechnology.
that many singularitarians are reactionary AI fetishists.
TheRedAnarchist23
6th June 2012, 18:12
WTF is singularity?:confused::confused:
DasFapital
6th June 2012, 22:38
WTF is singularity?:confused::confused:
The idea that at some point in the future humans and AI will merge and supposedly become godlike and eternal.
Permanent Revolutionary
7th June 2012, 16:58
That would be terrible.
I'm with Isaac Asimov on immortality: "There is nothing frightening about an eternal dreamless sleep. Surely it is better than eternal torment in Hell and eternal boredom in Heaven."
bricolage
7th June 2012, 17:02
Singularity is a load of crap, get off the internet; no one gives two shits about it or the crackpots associated with it.
Valdyr
7th June 2012, 20:59
"The singularity" is a load of shit that no competent world-systems theorist would take seriously. As Tenka said, it's usually a form of reactionary AI fetishism. It betrays an incredibly bourgeois - or at least petty bourgeois - class consciousness inasmuch as it thinks that progress will just continue under the current system, and that we can fix all our problems with technology rather than by changing social structures.
It also rests on many extremely problematic philosophical assumptions concerning the nature of mind, science, etc. For example, the idea that the universe is a computer simulation, which is actually taken seriously in their circles.
Lenina Rosenweg
8th June 2012, 04:57
An interesting talk David Brin gave at some singularity/AI event. Brin is a liberal, but most of what he said could fit into a Marxist paradigm.
http://www.youtube.com/watch?v=ryoqtB6H5nw&feature=related
It's probably physically possible. We could use nanobots to slowly duplicate and replace our cells with ones that don't decay as fast, or at all; in that manner we preserve the sense of self. There are a LOT of obstacles to overcome before we can do that - namely quickly copying DNA, making nanobots complex enough to produce cells or become them, figuring out precisely how neurons work collectively, and so on.
It wouldn't allow us to live forever, because accidents will happen. I read somewhere that a person would be killed accidentally within 200 years of life, but I can't back it up. We could have backups or a collective mind, but in the former case the backup is a clone and not you, and no one would willingly submit to the latter.
Also, as appealing as immortality sounds, no one would want to live forever; it would quickly become boring. Medical advances will probably allow a lifespan of 150 within the century, maybe more. I'd probably want to live to about 200 to 500 years, but not countless millennia like the pro-singularity people claim is possible.
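For what it's worth, the 200-year figure can be sanity-checked with a one-liner. This is a rough sketch assuming a constant annual accident risk; the 0.35% and 0.05% rates below are illustrative assumptions, not sourced figures:

```python
import math

def median_survival(annual_accident_risk):
    """Years until half of a non-aging population has died,
    given a constant yearly probability of fatal accident."""
    return math.log(0.5) / math.log(1.0 - annual_accident_risk)

# An assumed ~0.35%/year accident risk gives a median near 200 years:
print(round(median_survival(0.0035)))  # → 198
# At an assumed ~0.05%/year, the median stretches past a millennium:
print(round(median_survival(0.0005)))  # → 1386
```

So the "killed accidentally within 200 years" claim implicitly assumes a fairly high accident rate; under lower assumed rates, a non-aging person's median run is closer to a thousand years than two hundred.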
Jimmie Higgins
8th June 2012, 06:17
In the book "Accelerando" they call it the "nerd rapture" and I think that's a good description - no offense to nerds, because there's a lot of level-headed ones out there too :). The book is a sci-fi parody and critique of this idea, and from what I understand the plot is basically that the singularity is reached, computer brains are vastly superior to our own, and so humans are alienated from technology and the means of production altogether. The AI can predict what humans will think by creating simulations of individuals or groups, and so they are able to run the economy better and create a new economy called Econ 2.0 which humans can't even understand. The AI can provide all material needs for humans, and so they become a redundant life-form that is slowly ghettoized. There's a lot more to it from the small bits I've read, but that's the general economic effect of the singularity happening under capitalism. It's a straight parody of Kurzweil's ideas, and I've read that Kurzweil's writings are full of stuff about capitalist economic and trade models.
Singularity FAQ (http://www.antipope.org/charlie/blog-static/fiction/accelerando/toughguide.html)
Accelerando E-book (http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando-intro.html)
At any rate, I think I agree with what the parody is suggesting: technological rapture is not possible under present capitalism even if it were technologically possible. It could be that society stumbles in this direction, but it is utopian to think that the ruling class would either allow everyone access to this level of game-changing technology or even allow it to be implemented in any way like the utopians imagine.
When the idea of world-wide computer information networks gained wider attention, all sorts of people predicted that the World Wide Web would be this utopian plane which would radically alter our society in egalitarian ways. Instead our society, capitalism, has altered what were just communication networks and reforged them to create new ways to privatize and control the spread and sharing of digital information.
Capitalism needs technological change and development, but only as far as it can profit from it directly or indirectly. This is one of the contradictions of the system - it creates new technology and new possibilities but narrows or acts as a fetter on these potential applications and new developments.
ÑóẊîöʼn
8th June 2012, 16:15
Views of Ray Kurzweil? Personally I think he's a bit of a douchey bullshitter and charlatan.
Transhumanism and the singularity need a far better public advocate than that crank.
I personally wouldn't mind immortality and super-intelligence. It's major speculation though. I have his book "The Singularity Is Near" but I haven't even read it.
Don't bother. Kurzweil lets his optimism overwhelm his objectivity.
In my opinion we aren't anywhere close to the singularity and we are going to have a lot of obstacles which are honestly going to impede a simple application of Moore's law.
Moore's law is indicative, but not essential. A better way is to look at the current status of self-regulating and self-improving technology and its future trajectory.
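To make it concrete, the "simple application of Moore's law" being criticised is just this kind of bare exponential extrapolation (sketched from the Intel 4004's 2,300 transistors in 1971, with an assumed two-year doubling time):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Naive Moore's-law projection: pure exponential extrapolation,
    with no physical, economic, or social limits modelled at all."""
    return base_count * 2.0 ** ((year - base_year) / doubling_years)

# Extrapolating 41 years out happens to land in the right ballpark
# (a few billion transistors per chip in 2012)...
print(f"{transistors(2012):.1e}")  # → 3.4e+09
# ...but nothing in the formula says it keeps holding.
```

The curve fits the past, but it says nothing about fabrication limits or about the self-regulating, self-improving systems that would actually matter for a singularity.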
Singularity is kind of quack metaphysical mumbo-jumbo in this context. Nature and mortality ought to be conquered, however. I wish I was born in a time when it was already so. Bloody chance. :thumbdown:
Considering that the singularity, if it happens, will be a technological event instigated by humans, how the hell is it "metaphysical"? Or are you just tossing around philosophical terms as superficially defensible snarl words? (http://www.urbandictionary.com/define.php?term=snarl%20word)
I think the concept of the technological singularity is essentially meaningless,
Then you haven't been paying attention.
and, if the following quote from Wikipedia has any truth to it: that many singularitarians are reactionary AI fetishists.
How does prioritising the development of one technology over the other constitute "fetishism"?
WTF is singularity?:confused::confused:
Technological singularity (http://en.wikipedia.org/wiki/Technological_singularity)
"The term "technological singularity" was originally coined by Vinge, who made an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole."
The idea that at some point in the future humans and AI will merge and supposedly become godlike and eternal.
Nope. While that is a possible outcome of the singularity, it is by no means guaranteed.
That would be terrible.
I'm with Isaac Asimov on immortality: "There is nothing frightening about an eternal dreamless sleep. Surely it is better than eternal torment in Hell and eternal boredom in Heaven."
But that's not what death is, is it? Death is the complete and utter cessation of all personality.
Once we conquer death, then we won't have to worry about eternal torment or eternal boredom, because the real material universe is far more interesting than Heaven and is most certainly not Hell.
Singularity is a load of crap, get off the internet; no one gives two shits about it or the crackpots associated with it.
NO U
"The singularity" is a load of shit that no competent world-systems theorist would take seriously. As Tenka said, it's usually a form of reactionary AI fetishism. It betrays an incredibly bourgeois - or at least petty bourgeois - class consciousness inasmuch as it thinks that progress will just continue under the current system,
Then you either haven't been paying attention, or you think that Kurzweil speaks for all advocates of the singularity, when he most certainly does not.
"progress will just continue" - we don't know that. The people saying that are guessing, and from the sounds of it their guesses are still bound up in capitalist assumptions.
and that we can fix all our problems with technology rather than by changing social structures.
The two go hand in hand.
It also rests on many extremely problematic philosophical assumptions concerning the nature of mind, science, etc. For example, the idea that the universe is a computer simulation, which is actually taken seriously in their circles.
The concept of a technological singularity doesn't "rest" on the concept of a simulated universe. In fact, if the universe was a simulation then the likelihood of a technological singularity reduces because representing one in a simulation would be computationally intensive.
Revolution starts with U
8th June 2012, 21:43
I know one thing... if we reach this singularity before we make it in deep space... forced sterilization may not be far off.
Valdyr
10th June 2012, 15:50
The concept of a technological singularity doesn't "rest" on the concept of a simulated universe. In fact, if the universe was a simulation then the likelihood of a technological singularity reduces because representing one in a simulation would be computationally intensive.
I didn't say it did, I said it rested on similar problematic philosophical assumptions. Perhaps I could make my point more general and say "problematic assumptions regarding the relationship between computation and reality" or "computation and subjectivity," etc. Certainly many of the AI and/or whole-brain-emulation-centric visions of it.
As for the point about Moore's law, I don't see how you see what you're pointing out as some kind of revelation. Obviously it's a prediction based on a supposed observed tendency, and is not "assumed" to be true in a way any other such idea isn't. But this is just making a banal point about what exactly it means to make such a prediction. The point is that they create a model of just extrapolating into the future, usually without taking qualitative social factors into account. My choice of this "Kurzweilian" example was because it was typical. Would it not be similarly reasonable for me to describe Christians as "believing that the divine took human form and died for the sins of humanity" even though there are some obscure, heterodox sects that don't really believe this?
Anyway, as a former singulatarian and "rationalist" of the Less Wrong variety myself, can you answer the following questions for me:
1. You keep saying, in response to criticisms of specific predictions made by some singulatarians, that while that is one view, it is only one. Well, what is it that separates "the singularity" from the rather unremarkable prediction that "accumulating technological changes can reach a critical mass which induces qualitative change?"
2. What is it that makes you (if you do think this) think AGI is possible?
ÑóẊîöʼn
10th June 2012, 18:35
I didn't say it did, I said it rested on similar problematic philosophical assumptions. Perhaps I could make my point more general and say "problematic assumptions regarding the relationship between computation and reality" or "computation and subjectivity," etc. Certainly many of the AI and/or whole-brain-emulation-centric visions of it.
Unless you're some kind of dualist, I'm not sure what the issue is.
As for the point about Moore's law, I don't see how you see what you're pointing out as some kind of revelation. Obviously it's a prediction based on a supposed observed tendency, and is not "assumed" to be true in a way any other such idea isn't. But this is just making a banal point about what exactly it means to make such a prediction. The point is that they create a model of just extrapolating into the future, usually without taking qualitative social factors into account. My choice of this "Kurzweilian" example was because it was typical. Would it not be similarly reasonable for me to describe Christians as "believing that the divine took human form and died for the sins of humanity" even though there are some obscure, heterodox sects that don't really believe this?
Popularity has nothing to do with it. If the popular conception of a technological singularity is based on a misconception, then it strikes me as sensible to correct it.
On the other hand, it isn't even as if we have evidence for "the divine" or "sin", regardless of whether it ever supposedly took human form.
Anyway, as a former singulatarian and "rationalist" of the Less Wrong variety myself, can you answer the following questions for me:
1. You keep saying, in response to criticisms of specific predictions made by some singulatarians, that while that is one view, it is only one. Well, what is it that separates "the singularity" from the rather unremarkable prediction that "accumulating technological changes can reach a critical mass which induces qualitative change?"
I disagree that it is "unremarkable". Especially when it looks like self-reflecting intelligence is nothing special ontologically.
2. What is it that makes you (if you do think this) think AGI is possible?
Because self-reflecting intelligence, so far as we have examined it, appears to be an entirely naturalistic phenomenon. Replicating natural phenomena in an artificial manner is just a matter of working out how it happens and how to do it better and more efficiently.
ÑóẊîöʼn
10th June 2012, 18:36
I know one thing... if we reach this singularity before we make it in deep space... forced sterilization may not be far off.
How does that follow, exactly?
Revolution starts with U
11th June 2012, 03:38
If people can be immortal and still have kids, populations are going to grow. Technology can take us far... but only so far. Let me say that "replicators" a la Star Trek: TNG could solve the problem as well. But at some point, without replicators or space colonization, population growth will have to go negative.
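The arithmetic behind this worry is simple compound growth. A sketch, with an assumed 1% per-capita birth rate and zero deaths (both numbers illustrative):

```python
def population(start, birth_rate, years):
    """Population under immortality: constant per-capita birth rate, zero deaths."""
    return start * (1.0 + birth_rate) ** years

# With no deaths, even a modest 1% birth rate doubles the population
# roughly every 70 years:
print(f"{population(7e9, 0.01, 70):.2e}")  # → 1.40e+10
```

Without replicators or colonization, the only ways that curve flattens are the birth rate going to zero or the death rate coming back.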
doesn't even make sense
11th June 2012, 19:35
All I can say is that the idea of (something like) a singularity occurring in a civilization like our own scares the living shit out of me. And the sincere wank fantasies of libertarian sci-fi nerds you see pasted all around the internet do nothing to warm me up to the notion.
Valdyr
11th June 2012, 22:19
Unless you're some kind of dualist, I'm not sure what the issue is.
Popularity has nothing to do with it. If the popular conception of a technological singularity is based on a misconception, then it strikes me as sensible to correct it.
On the other hand, it isn't even as if we have evidence for "the divine" or "sin", regardless of whether it ever supposedly took human form.
Your last point is completely irrelevant. Of course we don't have evidence for these things. Mine was an argument from analogy: I was saying that if that description of what Christians believe were a reasonable description of Christianity, then attacks on this idea would be a good approximate attack on most of Christianity. Similarly, if ideas like a computational theory of mind, substrate-independence, the possibility of artificial general intelligence, etc. are held by most singulatarians, then it is reasonable to consider an attack on them an attack on most singulatarians, no?
I disagree that it is "unremarkable". Especially when it looks like self-reflecting intelligence is nothing special ontologically.
What is this non-sequitur? You didn't answer my question at all; you just pejoratively quoted my remark and then moved on to something else. You're displaying the problematic philosophical assumptions of which I speak right now, in:
1. Assumptions of the substrate-independence of self-reflecting intelligence
2. The conflating of subjectivity with "self-reflecting" intelligence
Because self-reflecting intelligence, so far as we have examined it, appears to be an entirely naturalistic phenomenon.
But what is this self-reflecting consciousness, how has it been "examined," and why is it the same as subjectivity?
Besides, something doesn't have to be supernatural (there doesn't have to be some "spirit-stuff," a dualism of ontological "substance") to not be substrate-independent.
Returning to the main point though, despite the kerfuffle over my apparently offensive use of the word "unremarkable," you still haven't actually answered my question as to what makes "the singularity" more special than any other technological paradigm shift.
Replicating natural phenomena in an artificial manner is just a matter of working out how it happens and how to do it better and more efficiently.
No it isn't. We can in one sense, but not in another sense. For example, we can replicate certain particle interactions by contriving the conditions of possibility under which those interactions would occur "without" us anyway. This is very different, however, from trying to alter the conditions of possibility for some occurrence.
For example, suppose I want to do some high-energy physics experiments. I contrive the smashing of these particles together by using technology and our knowledge of natural processes to bring about the conditions of possibility for such an occurrence. But this is very different from "replicating" the phenomenon in the sense of abandoning some of the crucial conditions of possibility.
There seem to be two main ways of "replicating the mind": either running a "mind program" or building an artificial brain. The latter I find more interesting and ambiguous, but the former I think is stupid. It is thoroughly idealist because it confuses the signifier with the signified. A perfect "modeling" of some functional structure is still not that structure; a virtual world is still exactly that, virtual in the philosophical sense. It is real, but not actual.
The case of the brain replica is more ambiguous and rests more on questions of substrate independence, as well as broader questions regarding subjectivity. Whether an artificial brain could be a subject is an entirely different can of worms, but I will at least say that there gets to be a point at which the "artificial" is indistinguishable from the "natural" other than by the circumstances of its arising - namely, that it was contrived.
Kenco Smooth
13th June 2012, 09:13
There seem to be two main ways of "replicating the mind": either running a "mind program" or building an artificial brain. The latter I find more interesting and ambiguous, but the former I think is stupid. It is thoroughly idealist because it confuses the signifier with the signified. A perfect "modeling" of some functional structure is still not that structure; a virtual world is still exactly that, virtual in the philosophical sense. It is real, but not actual.
What a load of nonsense. The only games in town for explaining the mind are connectionist models or Turing-type explanations (the mind as simply acting on syntactic information contained within propositions). These two are extremely limited as of today and really only get us a little way, but there's no other explanation even close. And the thing about both these models is that the information they work with is subject to multiple realisability. The only thing that matters is the functional information the mind acts upon and is constructed of. In theory, a steam-powered mind machine is perfectly possible by our best current understanding.
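A toy illustration of the multiple-realisability claim: the same truth function realised as a threshold ("connectionist") unit and as a bare lookup table. On this view, the shared input/output structure is all that matters; the weights and threshold below are arbitrary choices, and a steam-powered version would just be a third realisation:

```python
def perceptron_and(x, y, weights=(0.6, 0.6), threshold=1.0):
    """Logical AND realised as a single threshold unit."""
    return int(weights[0] * x + weights[1] * y >= threshold)

# The same function realised as a lookup table:
TABLE_AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Functionally indistinguishable, despite entirely different "substrates":
assert all(perceptron_and(*inp) == out for inp, out in TABLE_AND.items())
```

The objection upthread, of course, is precisely whether "functionally indistinguishable" exhausts what a mind is.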
Valdyr
17th June 2012, 17:18
What a load of nonsense. The only games in town for explaining the mind are connectionist models or Turing-type explanations (the mind as simply acting on syntactic information contained within propositions). These two are extremely limited as of today and really only get us a little way, but there's no other explanation even close. And the thing about both these models is that the information they work with is subject to multiple realisability. The only thing that matters is the functional information the mind acts upon and is constructed of. In theory, a steam-powered mind machine is perfectly possible by our best current understanding.
Leeb, you do not seem to be understanding the vector of that criticism. Those proposals are the two most common I see in singulatarian circles.
Anyways, I'm perfectly familiar with said debate. It's functionalism vs. connectionism. What reasons have you given me for accepting the dominant "naturalist" view other than telling me it's the best current understanding? I obviously don't think so.
Can you tell me why I should accept:
1. Multiple realizability
2. The reduction of the study of mind to brain/cognitive science
3. A one-to-one relationship between individual "mind" and "subjectivity" simpliciter
MuscularTophFan
24th June 2012, 23:00
Ray Kurzweil is a very brilliant man. Everything he has ever predicted has come true. Immortality will be achieved sometime in my lifetime, so all I have to do is live long enough until immortality is achieved. Since I'm currently 20 years old and in good health, I think I have a good shot at living forever. It would be very interesting to live forever.
Jimmie Higgins
25th June 2012, 11:57
Ray Kurzweil is a very brilliant man. Everything he has ever predicted has come true. Immortality will be achieved sometime in my lifetime, so all I have to do is live long enough until immortality is achieved. Since I'm currently 20 years old and in good health, I think I have a good shot at living forever. It would be very interesting to live forever.
I think it would too - or at least having the ability to continue your consciousness for as long as you wished.
But I wouldn't count on it. Make the most of the time we know we have: the present.
Hexen
25th June 2012, 19:17
In the book "Accelerando" they call it the "nerd rapture"
Let's stop right there. I would like to point out an example of how deeply rooted Christianity is in our society, which is what this whole "singularity" is actually about. (It traces back to the belief that humanity will become one with Jesus Christ, which has been secularized into an AI although it's literally the same thing - one which must first clear away humanity's 'damnation'/'sins' - and the very same concept has been secularized into the common "Human Nature" argument... it all makes sense now.)
Of course these Deus Ex: Invisible War scenes are a perfect example of what I'm talking about.
MuscularTophFan
25th June 2012, 21:29
I think it would too - or at least having the ability to continue your consciousness for as long as you wished.
But I wouldn't count on it. Make the most of the time we know we have: the present.
I guess for me as an atheist the concept of immortality is the only hope I have to live forever, because I know all too well that the concept of an afterlife is a fairytale. It's already too late for our grandparents' generation and might be too late for our parents, but our generation has a good shot at being the first human generation to live forever.
My fear is how to keep my brain intact. If my brain goes, so do I. We need to invest in brain stem technology. Also I'm afraid of the concept of machines gaining consciousness of their own. Robots would replace humans and slowly take over the universe for themselves. There are forces in the universe, like black holes, asteroids, and supernovas, that we would have to deal with. So there are many things immortal humans are going to have to solve.
The concept of immortality gives me some hope to live on. Now you may ask: why would you want to live forever? Better question: would you rather be dead forever, or be alive forever and retain all of those memories you had?
Jimmie Higgins
26th June 2012, 09:27
I guess for me as an atheist the concept of immortality is the only hope I have to live forever, because I know all too well that the concept of an afterlife is a fairytale. It's already too late for our grandparents' generation and might be too late for our parents, but our generation has a good shot at being the first human generation to live forever.
Of course - if there were one speculatively feasible technological development that I'd like to see happen, immortality or uploading our consciousness or whatever would be it.
I'm just not counting on it within the system: we can do all sorts of things with contemporary medical science, but I still don't have access to any of it because of lack of health care.
My fear is how to keep my brain intact. If my brain goes, so do I.
Yes, I agree.
Also I'm afraid of the concept of machines gaining consciousness of their own. Robots would replace humans and slowly take over the universe for themselves. There are forces in the universe, like black holes, asteroids, and supernovas, that we would have to deal with. So there are many things immortal humans are going to have to solve.
Well I'm less concerned about AI, which I think is probably more likely in my lifetime than immortality. I'd be concerned if AI was achieved in capitalism (for A.I. and regular consciousness alike... I'm sure the motivation for AI in the system is slave-mental-labor), but not in some post-revolutionary situation.
Artificial consciousness, in a sense, would just be an extension of our own consciousness. They might become "better" and more adaptable than us, but unless they really rounded us up and killed us - if humans just died off through natural selection (rather than some robot genocide) - then I'd be happy that we were able to extend our consciousness to new beings rather than just die out because we killed each other off or destroyed the environment or whatever. In a sense, even if humans couldn't achieve immortality, AI could be a kind of immortality for the human species as a whole.
The concept of immortality gives me some hope to live on. Now you may ask: why would you want to live forever? Better question: would you rather be dead forever, or be alive forever and retain all of those memories you had?
I'd love to live for as long as I wanted.
Danielle Ni Dhighe
26th June 2012, 10:17
It might make for an awesome work of fiction, but that's about it.
Paul Cockshott
26th June 2012, 17:38
Leeb why do you think that Turing thought AI would just be syntactic?
MuscularTophFan
26th June 2012, 22:49
Yes, I agree.
Well I'm less concerned about AI, which I think is probably more likely in my lifetime than immortality. I'd be concerned if AI was achieved in capitalism (for A.I. and regular consciousness alike... I'm sure the motivation for AI in the system is slave-mental-labor), but not in some post-revolutionary situation.
What if AI reaches such an advanced stage that it actually obtains self-consciousness? I mean, the human brain is nothing more than a living computer. What if AI reaches a point where it can randomly come up with an idea out of nowhere, like a human?
Artificial consciousness, in a sense, would just be an extension of our own consciousness. They might become "better" and more adaptable than us, but unless they really rounded us up and killed us - if humans just died off through natural selection (rather than some robot genocide) - then I'd be happy that we were able to extend our consciousness to new beings rather than just die out because we killed each other off or destroyed the environment or whatever. In a sense, even if humans couldn't achieve immortality, AI could be a kind of immortality for the human species as a whole.
What's the point of AI having immortality if there is nothing alive to know about it? The problem is that I'm still dead forever. For example, if someone goes through a teleporter, that person dies. They are gone forever. The person that comes out of the teleporter is identical to the person who went in in every way, except for the fact that your consciousness is gone forever.
I don't think humans will die off naturally. Humans are different from all other species on earth. We can build spacecraft that can take us to distant habitable planets. So even if all life on earth is killed off by an asteroid or black hole, we would still have humans living on different planets.
I guess the best solution is if humans and robots merged into some kind of cyborg. Humans would have to shed their organic form and become some kind of god-like thing or things or something. What would the thoughts of an immortal being be like? Being able to see the stars in the sky go supernova over the centuries would be beyond imagination. Would I eventually lose memories of my earlier life if I were immortal? Should humans continue to produce offspring? How would a society of immortal beings function? These are some questions I think humans would need to answer when we obtain immortality. Also, religious people would probably react very hostilely to the news that humans could live forever, because religion really has no purpose of existing if humans can live forever. If humans become gods, what's the point of religion? So there would probably be some kind of religious terrorist insurgency because of this, and religion would be banned afterwards.
I'd love to live for as long as I wanted.
That's why we need to invest heavily in fighting diseases, in nanotechnology, and in stem cell research.
Powered by vBulletin® Version 4.2.5 Copyright © 2020 vBulletin Solutions Inc. All rights reserved.