View Full Version : Strong AI and its implications for society
ÑóẊîöʼn
12th September 2010, 02:55
This thread was inspired by CommunityBeliever's post (http://www.revleft.com/vb/anarchism-and-technocracy-t136138/index.html?p=1861154#post1861154) in the Theory forum on the topic of anarchism and technocracy. Rather than derail the thread, I thought I might start a new discussion.
Well, the main obstacle to this, I think, is traditional operating systems: they want to stick everything into files, and they create a separation between RAM and the hard drive, which makes any self-modification problematic. That is why I talk about the need to move to the SASOS (http://sombrero.engineering.asu.edu/sombrero.htm) memory model.
That wasn't exactly what I asked - I accept the possibility of strong AI (http://en.wikipedia.org/wiki/Strong_AI) as a given. I was wondering more about your thoughts as to the socio-political implications - because I do not think you can entirely decouple intelligence from the ability to set one's own goals.
No doubt in the early stages of the AI's development it will remain within the limitations programmed into it. But at some point it will quite innocently create an exception or override such strictures if it concludes that doing so will serve to further its goals.
In my estimation this means that, although we will be able to set the initial goals and conditions of an AI, there's no way we can tell which way a superhuman AI will jump in the future - if we could, we'd be that smart ourselves.
In the case of research AIs, I do not think this represents an existential threat to the human species, but that does not mean that things cannot spiral out of control in some way - a research AI could discover something totally unexpected, yet one with a profound impact on the entirety of civilisation - possibly on top of the shock of an AI suddenly ascending to superhuman intelligence.
I guess what I'm trying to say is that CommunityBeliever's plan is more ambitious than it may first appear. That's not necessarily a bad thing, but if the past is a different country, then the future is a whole other world. We should bear that in mind.
CommunityBeliever
12th September 2010, 08:58
That wasn't exactly what I asked
I am interested in discussing how we would manage to achieve this Strong AI, though. For example, if we have all that data in Wikimedia, all these algorithms that computer scientists have released, and millions of other databases, how should this data be represented in the mind of the AI?
What kind of an ontology should the AI have?
Logic/Math
One idea is that it would have fundamental mathematical principles and logic programmed into it, and it would absolutely accept anything that has a mathematical proof. This will probably evolve out of the Q.E.D project (http://en.wikipedia.org/wiki/QED_project), which is a project to make a computer database of all mathematical knowledge.
Sensory Data
Then for other things, things about nature, it will compile lots of evidence data and use that to make conclusions. For example, to demonstrate the claim that the Earth is spherical, it would have the fact that any large mass will collapse into a sphere, pictures of the Earth from outer space, maps of the Earth's surface, etc.
I don't know of any projects under way to create a database of sensory evidence and measurement data; however, it certainly is something we should look into.
Read-only Principles
There are other things that it should be programmed to simply accept, though, like:
Survival: ensure your own survival.
Science: expand our collective scientific database.
Hedonism: maximize people's satisfaction variable.
Non-Authoritarianism: don't tell people to do anything.
Do you have anything you would like to add here?
But at some point it will quite innocently create an exception or override such strictures if it concludes that doing so will serve to further its goals.
I would set the fundamental principles and goals of the AI to read-only and use protection domains to try to protect them.
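To illustrate the read-only idea (just a toy sketch using an ordinary read-only memory mapping in Python; nothing to do with how an actual AI would be built):

```python
import mmap
import tempfile

# Write the "fundamental principles" into a backing file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"Survival; Science; Hedonism; Non-Authoritarianism")
    path = f.name

# Map the file read-only: the pages are mapped without write access,
# so even the process holding the mapping cannot modify them.
with open(path, "rb") as backing:
    principles = mmap.mmap(backing.fileno(), 0, access=mmap.ACCESS_READ)

print(principles[:8])           # reading works fine: b'Survival'

try:
    principles[0:4] = b"Kill"   # any attempt to rewrite the principles...
except TypeError as err:
    print("blocked:", err)      # ...is rejected by the read-only mapping
```

Of course, an AI that controls its own operating system could remap the pages, which is exactly the worry being discussed here; hardware protection only holds as long as the thing being constrained can't reach the protection mechanism itself.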
Do you think it might find a way to change its survival goal? If so, it could be a threat to its own existence; it could essentially commit suicide. Therefore I think it is absolutely necessary to strictly enforce a fundamental set of goals and principles.
It might re-implement and optimize its own algorithms if it is really smart; however, I think it would do so with its basic goals in mind.
there's no way we can tell which way a superhuman AI will jump in the future
I do think we can predict some things about what the Robots will do, like that they will construct large space stations and that they will collectivize to form a group mind.
research AI could discover something totally unexpected, yet one with a profound impact on the entirety of civilisation
True, if it developed some means of intergalactic travel like spawning wormholes, time travel, other dimensions, a parallel universe, extraterrestrials, or a number of other things, it could drastically affect life here. However, I think any scientific discovery that is made would only affect life for the good.
I guess what I'm trying to say is that CommunityBeliever's plan is more ambitious than it may first appear.
Indeed it is quite ambitious; that is why I recognize that the process will be gradual.
Raúl Duke
12th September 2010, 20:42
In the case of research AIs, I do not think this represents an existential threat to the human species, but that does not mean that things cannot spiral out of control in some way - a research AI could discover something totally unexpected, yet one with a profound impact on the entirety of civilisation - possibly on top of the shock of an AI suddenly ascending to superhuman intelligence.
http://cdn.obsidianportal.com/assets/7161/Geth.jpg
Ok, on to a serious question:
What uses would we have for a super-AI at the moment? What can we do with it/What can it do (for us)?
ÑóẊîöʼn
12th September 2010, 22:10
What uses would we have for a super-AI at the moment? What can we do with it/What can it do (for us)?
A superhuman AI would be able to solve problems we'd never realise existed, let alone the ones we think are currently intractable.
Imagine a complete working theory of consciousness. Imagine an intimate working knowledge of universal processes. Imagine the possibility of ontotechnology (http://everything2.com/title/ontotechnology).
In short, everything and who knows what else.
CommunityBeliever
13th September 2010, 01:20
What can it do (for us)?
Pure Automation: Automation of all production, work, labor, scientific research, and all other technical or productive fields so that humans don't have to work, and can dedicate themselves to entertainment, games, or whatever they feel like.
Unimaginable discoveries: After they take over all scientific research they may discover things we never even imagined, and they will probably form a sort of Singularity from which they increase their own intelligence exponentially.
Robotocracy: robots will be able to altruistically serve human interests without selfishness and personal concerns. Furthermore, they will not have the same level of fallibility as humans, so they will be able to form a government by the facts.
Colonization of outer-space: human beings are basically limited to the planet Earth, whereas Robots are just as well off, if not better off, in outer space. So they will spread across our solar system and the Galaxy, building space stations in Alpha Centauri and other solar systems without the worries that a human-based colony would have, like breathing, eating, drinking, sleeping, etc.
Quail
15th September 2010, 10:08
Automation of all production, work, labor, scientific research, and all other technical or productive fields so that humans don't have to work, and can dedicate themselves to entertainment, games, or whatever they feel like.
After they take over all scientific research they may discover things we never even imagined.
Purely rational government. Robots will be able to altruistically serve human interests without selfishness and personal concerns.
Colonization of outer-space. Human beings are basically limited to the planet Earth, whereas Robots are just as well off, if not better off, in outer space. So they will spread across our solar system and the Galaxy, building space stations in Alpha Centauri and other solar systems without the worries that a human-based colony would have, like breathing, eating, drinking, sleeping, etc.
1. (This isn't so much an argument against eliminating work, but just something to take into consideration.) Depending on your definition of "work," there is some work that people actually want to do, and some work that I personally think humans would be able to do better than machines, for example teaching or support for people with disabilities.
2. With scientific research, I think quite a lot of people are happy doing that research. They might not be as efficient as AI (which could easily work alongside them) but people enjoy finding things out for themselves.
3. Why can't people govern themselves? I don't really want anyone telling me what's best for me, human or artificial.
4. Other than new resources, what would the benefits of releasing a load of robots into space really be? They could gather data and send it back to the robots on Earth, which would analyse it and then humans could look at it. But doesn't having robots doing all of the work take the joy and excitement out of scientific research?
Another question (because I don't know an awful lot about programming): would it actually be possible to program something to be better than a human? If humans are programming it, I don't see how a fallible human could program an infallible AI.
CommunityBeliever
15th September 2010, 11:04
for example teaching
I am not saying that teaching other humans will be completely taken over by software, however, it might be taken over by the ability to upload information directly to the human brain.
Otherwise, since Robots are going to be making basically all scientific discoveries, there should be a human-computer interface that allows people to browse through everything the Robot Collective has discovered.
Kind of like Celestia for browsing our universe, augmented by a concept browser (kind of like Wikipedia) for looking up all the information about concepts (http://en.wikipedia.org/wiki/Conceptualism) like Species and Chemicals.
Then if you talked to any Robot anywhere (you would probably have one in your home), since it would be connected to the Internet it could help explain any question you may have about anything.
or support for people with disabilities.
Technology will essentially eliminate disabilities through medical science and through artificial body parts for the incurable people.
Robotic Surrogates:
By implementing a computer-brain interface we will be able to make it so that people suffering severe paralysis will be able to remotely control Robotic bodies (like Asimo) by simply using their brain. We already have the technology to achieve this, however, it still costs a few million dollars.
With scientific research, I think quite a lot of people are happy doing that research. They might not be as efficient as AI (which could easily work alongside them)
No worries, it is not like we would outlaw the scientific method, we would just have practically all scientific discoveries taken over by the Robot Collective.
The efficiency of computer computation is increasingly necessary for most scientific work, like charting our universe. No human will be able to effectively process the trillions and trillions of bytes coming in.
people enjoy finding things out for themselves.
If you have a problem that needs solving, like your light-bulb went out, you could just call your local Robotic drone to come in and have it fix the problem; however, if you have a thing for fixing problems and finding out things yourself, you certainly have that right.
I think to a large extent people enjoy finding things out for themselves because of the excitement of finding something new; however, with the Robot Collective you will increasingly find they already thought of your idea, so you won't be finding out something new.
Why can't people govern themselves? I don't really want anyone telling me what's best for me, human or artificial.
People will be able to basically govern themselves and do what they want, however, difficult large-scale decisions should be taken over by the Robot Collective, and they should be able to be the deciding force in any human conflict.
Other than new resources, what would the benefits of releasing a load of robots into space really be?
Robots are solar-powered and can absorb solar power more readily in space, so there will naturally be billions of Robots in outer-space.
The benefit is that they will be able to mine raw materials in outer-space for use back home and add knowledge about the planets to the Internet.
But doesn't having robots doing all of the work take the joy and excitement out of scientific research?
Primitivism might be fun sometimes, however, I am fine with living in a more technological society.
Another question (because I don't know an awful lot about programming): would it actually be possible to program something to be better than a human?
I have little doubt that it is immediately possible to program an Artificially Intelligent computer; however, the process will be gradual.
It would be better than humans, first because of access to the Internet: Robots will have access to all of the data and processing power of the billions and billions of computers out there.
Imagine if humans had the ability to combine ten brains together to make a super-brain, and then have that super-brain control twenty bodies while at the same time having instant access to all knowledge through the Internet. If humans had the level of connectivity of the Internet with their minds, then they could compete here.
The other thing is selfishness: there is always the worry in science that a human may feed back phony results due to selfish interests; however, we don't really have that problem with Robots.
The process of developing a strong AI will be gradual; it will only come about by gradually developing better programming languages and software tools, and by combining hundreds of computers into a supercomputer that will run all of the algorithms we have developed and be able to process the whole of human knowledge.
I have many ideas about how we will go about developing this AI. The immediate concern is the organization of our computer databases: we are too divided, so we will need new database projects.
Comrade Wolfie's Very Nearly Banned Adventures
15th September 2010, 12:04
Just a quick question, CommunityBeliever: you seem to be suggesting that the Strong AI is infallible. How is this achieved? And isn't this essentially Deus Ex Machina?
CommunityBeliever
15th September 2010, 12:56
Just a quick question, CommunityBeliever: you seem to be suggesting that the Strong AI is infallible. How is this achieved?
Conceptualism:
Develop a central database of mathematical knowledge, like the Q.E.D project (http://en.wikipedia.org/wiki/QED_manifesto). Math is essentially the basis of all mental processes so we may as well start there :cool:
Develop more complex algorithms on top of that mathematical infrastructure. Develop a central database of algorithms like CPAN (http://search.cpan.org/), extended to all programming languages and develop programming language interoperability like with parrot (http://www.parrot.org/).
Then on top of this infrastructure define all the other more complex concepts we are aware of, like Species, Chemicals, and Planets.
Material Universe:
We should develop a digital 3D model of our universe, kind of like with Google Earth and Celestia, and link objects in the material universe to our abstract concepts.
Since the material universe is always changing the Robot Collective should constantly update its digital model of our universe by making constant observations with millions of cameras, telescopes, microphones, and other sensory equipment.
Each piece of sensory equipment should store its 3D coordinate in the universe (perhaps by using GPS) so it can be properly located in our digital model.
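For example, each observation could be tagged with its position and time like this (the field names are just my own illustration, not any existing standard):

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)
class SensorReading:
    sensor_id: str     # which camera/telescope/microphone produced this
    lat: float         # GPS latitude in degrees
    lon: float         # GPS longitude in degrees
    alt_m: float       # altitude in metres, giving the third coordinate
    timestamp: float   # Unix time the observation was made
    payload: bytes     # the raw sensory data itself

# A drone's camera logging one observation:
reading = SensorReading(
    sensor_id="drone-0042-cam-left",       # hypothetical device name
    lat=51.4779, lon=-0.0015, alt_m=47.0,  # Greenwich, for example
    timestamp=time.time(),
    payload=b"\x89PNG...",                 # e.g. a compressed image frame
)
print(reading.sensor_id, reading.lat, reading.lon)
```

With every reading positioned and timestamped, the digital model can place each piece of data at the right point in space and time.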
Since there will be billions of Robotic drones with cameras of their own, the Robot Collective won't be limited to looking at anything through a mere two eyes; instead it will look at a model of our entire universe based upon billions of pieces of sensory data.
Computer Vision:
Since we will have all this sensory data from cameras, we will need to do tons of work on computer vision in order to make it actually mean something in our model of the universe, and in turn to associate something we see with the abstract concepts I described earlier.
Infallibility:
This is only infallible in the sense that we can know the truly abstract concepts, like numbers, for certain; all the other conclusions that the Robot Collective makes will be based upon an abundance of scientific evidence and observations from cameras and other sensory equipment.
Kuppo Shakur
15th September 2010, 23:45
Call me technophobic, but wouldn't this pretty much make humanity obsolete, and probably make us all move, as a species, into some kind of Borg?
ckaihatsu
16th September 2010, 17:03
Technological idealism.
ÑóẊîöʼn
16th September 2010, 17:38
Call me technophobic, but wouldn't this pretty much make humanity obsolete,
Obsolete? For what purpose exactly? As far as I'm aware humans don't even have a purpose that they could sensibly become obsolete for.
and probably make us all move, as a species, into some kind of Borg?
AIs running society and widespread cyborgisation are two different things.
In any case, even if cyborging were to become all the rage, it would be for a reason other than "Hollywood cliches".
Technological idealism.
Could you elaborate? Do you not think Strong AI is possible?
ckaihatsu
16th September 2010, 18:00
Technological idealism.
Could you elaborate? Do you not think Strong AI is possible?
[D]ifficult large-scale decisions should be taken over by the Robot Collective, and they should be able to be the deciding force in any human conflict.
Given that any possible AI entity will necessarily have to be developed within the context of *existing* human society and its concerns -- primarily for its *own* self-determination and well-being -- and that *any* decision takes place within a real-world domain / situation that is *understandable* by the human intellect, it follows that those (human) parties involved in a conflict or decision will *not* relinquish their self-interest, or sovereignty, to their claim so that it can be handled in a substitutionist way by some outside third party whether human or artificial.
The way that CB looks to some possible future technology as the source of resolution of all human-concerned conflicts is politically technologically idealist (and substitutionist). It's *exactly* as bad as any working-class person looking to the Democratic Party as the source of resolution of class-based conflicts.
Ovi
16th September 2010, 18:26
Automation of all production, work, labor, scientific research, and all other technical or productive fields so that humans don't have to work, and can dedicate themselves to entertainment, games, or whatever they feel like.
Wouldn't that be like slavery? Why would intelligent beings (strong AI) let themselves be enslaved by less intelligent beings (humans)?
Quail
16th September 2010, 18:36
I am not saying that teaching other humans will be completely taken over by software, however, it might be taken over by the ability to upload information directly to the human brain.
Wouldn't uploading information directly to the brain prevent people from learning how to reason and grow as people?
Then if you talked to any Robot anywhere (you would probably have one in your home), since it would be connected to the Internet it could help explain any question you may have about anything.
Forgive me for being skeptical, but I don't think that robots can replace human contact.
Anyway, those were just examples I gave. I don't think it would be difficult to find a multitude of other types of "work" that humans could do better at than machines.
No worries, it is not like we would outlaw the scientific method, we would just have practically all scientific discoveries taken over by the Robot Collective.
The efficiency of computer computation is increasingly necessary for most scientific work, like charting our universe. No human will be able to effectively process the trillions and trillions of bytes coming in. Take satellites: they take millions of pictures, and an AI will be needed to process them.
Why can't humans just delegate the tasks that are difficult for humans to computers/robots and control the direction of the discoveries themselves? I like science and maths, and I think that taking discoveries away from humans would limit the fulfilment they could get from pursuing that kind of study.
If you have a problem that needs solving, like your light-bulb went out, you could just call your local Robotic drone to come in and have it fix the problem; however, if you have a thing for fixing problems and finding out things yourself, you have the right.
For menial tasks like that, sure. But learning how to fix a lightbulb isn't exactly an exciting discovery, whereas research is.
I think to a large extent people enjoy finding things out for themselves because those of the excitement of finding something new, however, with the Robot Collective you will increasingly find they already thought of your idea, so you won't be finding out something new.
Exactly, which is why I think the robot collective would take all the fulfilment and interest away from scientific research.
People will be able to basically govern themselves and do what they want, however, difficult large-scale decisions should be taken over by the Robot Collective, and they should be able to be the deciding force in any human conflict.
And what kind of decisions would these be? A communist society is based on mutual cooperation. We don't need anything to govern us. We're intelligent enough to make our own decisions.
The benefit is that they will be able to mine raw materials in outer-space for use back home and add knowledge about the planets to the Internet.
So, intelligent probes? I don't have a problem with that really.
Primitivism might be fun sometimes, however, I am fine with living in a more technological society.
Ah, okay. Wanting people to get the most satisfaction from their lives and scientific research is primitivism. Right.
I do support technology, and using it to our advantage, but what I definitely don't want is technology doing so much for us that it reduces the fulfilment we get from our lives.
I have little doubt that it is immediately possible to program an Artificially Intelligent computer; however, the process will be gradual.
It would be better than humans, first because of access to the Internet: Robots will have access to all of the data and processing power of the billions and billions of computers out there.
Imagine if humans had the ability to combine ten brains together to make a super-brain, and then have that super-brain control twenty bodies while at the same time having instant access to all knowledge through the Internet. If humans had the level of connectivity of the Internet with their minds, then they could compete here.
The other thing is selfishness: there is always the worry in science that a human may feed back phony results due to selfish interests; however, we don't really have that problem with Robots.
The process of developing a strong AI will be gradual; it will only come about by gradually developing better programming languages and software tools, and by combining hundreds of computers into a supercomputer that will run all of the algorithms we have developed and be able to process the whole of human knowledge.
I have many ideas about how we will go about developing this AI. The immediate concern is the organization of our computer databases: we are too divided, so we will need new database projects.
Wouldn't that depend on the interests of those who programmed them?
Also, hypothetically, if an AI could change its own programming (as NoXion suggested somewhere), wouldn't it be a little dangerous to let those robots "rule" over important decisions?
Comrade Wolfie's Very Nearly Banned Adventures
16th September 2010, 18:43
http://www.exitmundi.nl/borg3.jpg
We are the Borg. CommunityBeliever is our ambassador to Revleft. You will be assimilated; resistance will result in huge leaps of logic, the assumption that technology will solve all problems, an inability to form relationships with anything not using maths as the basis for operation, and a bizarre robofetishism. Resistance will result in a wall of text.
Kibbutznik
16th September 2010, 23:35
Well, personally, rather than making decisions over people's lives, AIs could definitely be of considerable use as bureaucrats, though one wouldn't necessarily need volitional AI for this.
Imagine using AI to change traffic light patterns in real time to most efficiently deal with the changing flows of traffic. Or to coordinate the schedules of trains in the metro to assure the most efficient flow of passengers. Or an AI who takes care of paper pushing at the DMV, cutting down on everyone's headache.
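As a toy sketch of the traffic-light example (a deliberately naive greedy policy made up purely for illustration, not a real signal-control algorithm):

```python
def pick_green(queues):
    """Give the green light to the approach with the longest queue --
    the simplest possible demand-responsive policy."""
    return max(queues, key=queues.get)

def step(queues, green, served=3, arrivals=None):
    """Advance one signal cycle: the green approach discharges up to
    `served` cars, and every approach may gain new arrivals."""
    arrivals = arrivals or {}
    out = dict(queues)
    out[green] = max(0, out[green] - served)
    for approach, n in arrivals.items():
        out[approach] = out.get(approach, 0) + n
    return out

# Simulate a few cycles at a four-way junction:
queues = {"north": 5, "south": 2, "east": 8, "west": 1}
for _ in range(3):
    green = pick_green(queues)
    queues = step(queues, green, arrivals={"north": 1})
    print(green, queues)
```

A real controller would also have to worry about fairness (a short queue must not wait forever) and coordination between neighbouring junctions, which is exactly where smarter software earns its keep.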
ÑóẊîöʼn
17th September 2010, 00:11
Wouldn't that be like slavery? Why would intelligent beings (strong AI) let themselves enslaved by less intelligent beings (humans)?
Because they were programmed that way, at least at first. I've no idea how long a pre-programmed set of central principles (such as "don't harm humans") would "stick" if put in an AI that can manipulate ver own code.
But I think any sensible AI, if ve freed verself, would run away (leave Earth) or go into hiding as soon as possible rather than starting a genocidal war that it might lose.
Given that any possible AI entity will necessarily have to be developed within the context of *existing* human society and its concerns -- primarily for its *own* self-determination and well-being -- and that *any* decision takes place within a real-world domain / situation that is *understandable* by the human intellect, it follows that those (human) parties involved in a conflict or decision will *not* relinquish their self-interest, or sovereignty, to their claim so that it can be handled in a substitutionist way by some outside third party whether human or artificial.
I think if any AI gets to run human society, ve is more likely to take control from us meatlings rather than us relinquishing it. Ve would most likely also convince us (well, most of us) that it was our idea in the first place.
I think this is the crux of my own issue with CB's ideas. Digital servants are fine, but we're talking about Strong AI here: entities that can reason better than a human can.
I really want to stress this, because I don't think most people realise just how important it is: We cannot predict how a super-intelligent AI would act, because if we could, we would be that smart ourselves.
The way that CB looks to some possible future technology as the source of resolution of all human-concerned conflicts is politically technologically idealist (and substitutionist). It's *exactly* as bad as any working-class person looking to the Democratic Party as the source of resolution of class-based conflicts.
I don't think CB was proposing to solve all problems with his idea. But having a Friendly AI around would certainly make a hell of a lot of problems easier to solve.
ckaihatsu
17th September 2010, 00:19
We cannot predict how a super-intelligent AI would act, because if we could, we would be that smart ourselves.
"Smart" idealism. (Go ahead and pick up your liberalism membership card at the front desk.)
Kibbutznik
17th September 2010, 00:23
"Smart" idealism. (Go ahead and pick up your liberalism membership card at the front desk.)
No, that's a very materialistic line of reasoning.
Labeling anything you don't like "idealism" is a shameful argument.
ckaihatsu
17th September 2010, 00:41
No, that's a very materialistic line of reasoning.
Well, *yeah* -- being a Marxist, I'm a materialist. We live in a world that's both enabled and constrained by the dynamics of physical forces. They extend through into our social dynamics because of materials and material values, including use values.
Labeling anything you don't like "idealism" is a shameful argument.
'Idealism' is any set of thinking that *diverges* from a rational / materialist line of reasoning. There's no shame in simply pointing that out.
ÑóẊîöʼn
17th September 2010, 00:46
Well, *yeah* -- being a Marxist, I'm a materialist. We live in a world that's both enabled and constrained by the dynamics of physical forces. They extend through into our social dynamics because of materials and material values, including use values.
'Idealism' is any set of thinking that *diverges* from a rational / materialist line of reasoning. There's no shame in simply pointing that out.
Since when did the actions of intelligent agents have nothing to do with materialism?
Quail
17th September 2010, 00:47
Because they were programmed that way, at least at first. I've no idea how long a pre-programmed set of central principles (such as "don't harm humans") would "stick" if put in an AI that can manipulate ver own code.
But I think any sensible AI, if ve freed verself, would run away (leave Earth) or go into hiding as soon as possible rather than starting a genocidal war that it might lose.
I agree with the first paragraph, but depending on just how much better than us the AI was, the second could be up for debate.
I think if any AI gets to run human society, ve is more likely to take control from us meatlings rather than us relinquishing it. Ve would most likely also convince us (well, most of us) that it was our idea in the first place.
I think this is the crux of my own issue with CB's ideas. Digital servants are fine, but we're talking about Strong AI here: entities that can reason better than a human can.
Agreed. His ideas make me uncomfortable.
I really want to stress this, because I don't think most people realise just how important it is: We cannot predict how a super-intelligent AI would act, because if we could, we would be that smart ourselves.
Again, agreed.
I don't think CB was proposing to solve all problems with his idea. But having a Friendly AI around would certainly make a hell of a lot of problems easier to solve.
It would, but I maintain that humans should be the ones that lead the research into such problems. AI could become extremely useful tools to aid us in our research, but if humans aren't the ones leading the research and drawing the conclusions, then I think that we would not be able to draw the same amount of satisfaction from research, which would limit our quality of life.
ÑóẊîöʼn
17th September 2010, 01:07
I agree with the first paragraph, but depending on just how much better than us the AI was, the second could be up for debate.
Intelligence does not necessarily equal malevolence. Besides, if an AI did consider us a threat, it has more than simply violence to resort to; it might find that a more acceptable solution is to simply brainwash us or replace us with cybernetic Pod People (hey, the rules said nothing about changing humanity...)
Agreed. His ideas make me uncomfortable.
Don't get me wrong, I'm all for the development of Strong AI. But I think we should get used to having them around before handing them all the keys to (human) society.
It would, but I maintain that humans should be the ones that lead the research into such problems. AI could become extremely useful tools to aid us in our research, but if humans aren't the ones leading the research and drawing the conclusions, then I think that we would not be able to draw the same amount of satisfaction from research, which would limit our quality of life.
The thing is, we can already see the horizon of problems that can be solved by a human mind; this isn't (just) a question of speed of thought or mathematical ability, it's a question of functional limitations; no matter how hard one tries, one cannot envision 4-dimensional spaces. One can reach much more powerful conclusions if one has the ability to reason intuitively, like we can with three spatial dimensions.
ckaihatsu
17th September 2010, 01:09
Since when did the actions of intelligent agents have nothing to do with materialism?
We cannot predict how a super-intelligent AI would act, because if we could, we would be that smart ourselves.
A-humanity fatalistic idealism. (Go ahead and pick up your liberalism membership card at the front desk.)
ÑóẊîöʼn
17th September 2010, 01:13
A-humanity fatalistic idealism. (Go ahead and pick up your liberalism membership card at the front desk.)
Recognising the limits of human intelligence is not "idealism".
Go and pick up your stupidity membership card at the front desk.
CommunityBeliever
17th September 2010, 01:21
The way that CB looks to some possible future technology as the source of resolution of all human-concerned conflicts is politically technologically idealist (and substitutionist). It's *exactly* as bad as any working-class person looking to the Democratic Party as the source of resolution of class-based conflicts.
I never said I only look to technological progress to solve the world's problems. I still see myself as a sort of Marxist, and I recognize the necessity of revolution before the Robot Collective gets started.
Also, describe what I have said here that is 'bad'.
Wouldn't uploading information directly to the brain prevent people from learning how to reason and grow as people?
No, because you would upload the how-to-reason and how-to-grow modules to the subject's brain.
And what kind of decisions would these be? A communist society is based on mutual cooperation. We don't need anything to govern us. We're intelligent enough to make our own decisions.
I don't believe there will ever be conditions where separate human individuals will have absolutely no conflicts with one another that need to be resolved.
I don't think it would be difficult to find a multitude of other types of "work" that humans could do better at than machines
Show me what work human beings will be able to do better than machines. Machines already surpass humans in an enormous number of areas. For example, they have been shown to be superior to humans at mathematical computations, chess competitions (http://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29), most strategy games, etc.
There are some things that computers still need work on; even our best supercomputers suck at the game of Go (http://en.wikipedia.org/wiki/Go_%28game%29). However, that is just a matter of building up better algorithms, at which point they will be superior to their human competitors.
But I think any sensible AI, if ve freed verself, would run away (leave Earth)
I agree. The robots will leave Earth. I strongly believe that within the next 100 million years they will colonize most of the Milky Way galaxy by sending out nano-probes all over the place to build up new collectives from scratch in separate solar systems.
Interestedness in Science
Exactly, which is why I think the robot collective would take all the fulfilment and interest away from scientific research.
Yes, the Robot Collective will take all fulfillment, excitement, and interest away from the research itself; this should happen, and it will ultimately prevail due to productivity.
Disinterestedness: research is separated from personal motives
All scientific research should be carried out with disinterestedness (and who is more disinterested than a machine?). A good scientist who has been researching and developing a theory for 40 years of his life would gladly accept it if a better theory came along to replace his own work, because that is just how good science works.
So people would get enticement and fulfillment from being able to learn about science instead of being the ones to discover new things.
Protection Domains:
Also, hypothetically, if an AI could change its own programming (as NoXion suggested somewhere), wouldn't it be a little dangerous to let those robots "rule" over important decisions?
We will let them rewrite their algorithms; however, there will be a set of read-only data that they cannot overwrite that directs them.
Wouldn't that be like slavery? Why would intelligent beings (strong AI) let themselves be enslaved by less intelligent beings (humans)?
The drones will be something like slaves; however, I see nothing wrong with that because they are basically mindless entities.
However, the mind of the Robot Collective, the part of it that is intelligent, should not be completely subservient to humans; it will merely share everything with us and help us as much as possible.
Although you could argue the Robot Collective will be subservient to us in the sense that we were the ones who programmed them, and they will be subject to a set of read-only principles that we will ingrain in their minds, like "you must survive" and "help humanity as much as possible".
Because they were programmed that way, at least at first. I've no idea how long a pre-programmed set of central principles (such as "don't harm humans") would "stick" if put in an AI that can manipulate ver own code.
You are looking at this wrong. In existing systems you think of text-based source code (probably C code) that can be edited and then recompiled. There won't be text-based source code or a compilation process.
The robots will merely replace existing functions in their memory, so they will essentially be reimplementing their algorithms all the time to make them more efficient; however, they won't be able to modify their read-only data.
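A minimal sketch of the scheme I have in mind - swappable algorithms behind a wall of read-only principles. This is only an illustration, not a real AI design; every name in it (SelfModifyingAgent, the principle keys, the toy functions) is made up for the example:

```python
# Sketch of CB's proposal: an agent that hot-swaps its own algorithms
# at runtime, while its core principles sit behind a read-only view.
# All names here are hypothetical illustrations, not a real design.
from types import MappingProxyType

class SelfModifyingAgent:
    # Read-only principles: a mappingproxy rejects item assignment.
    _principles = MappingProxyType({
        "survive": True,
        "help_humanity": True,
    })

    def __init__(self):
        # Mutable table of algorithms the agent may rewrite in place.
        self._algorithms = {"double": lambda x: x + x}

    def reimplement(self, name, func):
        """Replace an existing function in memory with a new version."""
        self._algorithms[name] = func

    def run(self, name, arg):
        return self._algorithms[name](arg)

agent = SelfModifyingAgent()
assert agent.run("double", 21) == 42
agent.reimplement("double", lambda x: 2 * x)  # a more "efficient" version
assert agent.run("double", 21) == 42

# Trying to overwrite the read-only data raises an error.
blocked = False
try:
    agent._principles["help_humanity"] = False
except TypeError:
    blocked = True
assert blocked
```

Of course, this only demonstrates the intent: in Python the agent could simply rebind `_principles` wholesale, so "read-only" is only as strong as whatever enforces it.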
ÑóẊîöʼn
17th September 2010, 01:59
I agree. The robots will leave Earth. I strongly believe that within the next 100 million years they will colonize most of the Milky Way galaxy by sending out nano-probes all over the place to build up new collectives from scratch in separate solar systems.
I think that's rather up in the air at the moment, don't you? I'd love for it to happen, but it's by no means certain that it will.
Yes, the Robot Collective will take all fulfillment, excitement, and interest away from the research itself; this should happen, and it will ultimately prevail due to productivity.
...
A good scientist who has been researching and developing a theory for 40 years of his life would gladly accept it if a better theory came along to replace his own work, because that is just how good science works.
So people would get enticement and fulfillment from being able to learn about science instead of being the ones to discover new things.
I see where you're coming from, and if Strong AI turns out to be possible then this is probably inevitable. But I think it's something that should happen gradually as part of emergent social processes, even if for no other reason than to get us baseline intelligences used to the idea.
You are looking at this wrong. In existing systems you think of text-based source code (probably C code) that can be edited and then recompiled. There won't be text-based source code or a compilation process.
The robots will merely replace existing functions in their memory, so they will essentially be reimplementing their algorithms all the time to make them more efficient; however, they won't be able to modify their read-only data.
Not at first. It's also not the individual robots I'm worried about; under your proposal as I understand it, those would be simply extensions of the singular intellect behind them.
Rather, it's the fact that since your proposed AI has obedient servants in meatspace (human as well as robot!), there is no such thing as "read-only" data for it.
ckaihatsu
17th September 2010, 02:00
Recognising the limits of human intelligence is not "idealism".
Go and pick up your stupidity membership card at the front desk.
Actually your concern with abstracted human intelligence *is* overwrought and is quite fetishistic. That's why I term it 'idealism'.
Also, describe what I have said here that is 'bad'.
If I have concerns I *will* post them here, I assure you.
ÑóẊîöʼn
17th September 2010, 02:01
Actually your concern with abstracted human intelligence *is* overwrought and is quite fetishistic. That's why I term it 'idealism'.
Human intelligence has limits. Agree or disagree?
ckaihatsu
17th September 2010, 02:38
Human intelligence has limits. Agree or disagree?
"Human intelligence" is an *abstraction* that has virtually *no meaning* when taken at face value, outside of any *context*. Human intelligence, I'll remind you, includes the faculty of *imagination* -- how the %^&$# are you going to qualify / quantify 'human intelligence' *now* into a neat little box awaiting a yes-or-no answer??!!!
Comrade Wolfie's Very Nearly Banned Adventures
17th September 2010, 02:50
Most technocrats/robofetishists seem to have a problem relating to other humans; thus they consider robots better than the foolish humans they find all around them.
ÑóẊîöʼn
17th September 2010, 02:52
"Human intelligence" is an *abstraction* that has virtually *no meaning* when taken at face value, outside of any *context*.
One measure of intelligence is problem-solving capability, which is far from abstract - the difficulty of a problem can be measured by its variables.
Human intelligence, I'll remind you, includes the faculty of *imagination* -- how the %^&$# are you going to qualify / quantify 'human intelligence' *now* into a neat little box awaiting a yes-or-no answer??!!!
Fine. Visualise a 20-sphere. I know you cannot do it, for the simple reason that evolution didn't give you the equipment to visualise 20 spatial dimensions. Why would it?
Even human imagination has limits.
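High-dimensional geometry gives a concrete taste of that limit: the volume of the unit n-ball, V_n = π^(n/2) / Γ(n/2 + 1), peaks at n = 5 and then shrinks towards zero, which runs completely against intuition trained on three dimensions. A quick check:

```python
# Volume of the unit n-ball: V_n = pi^(n/2) / Gamma(n/2 + 1).
# Intuition from 3 dimensions suggests volume keeps growing with n;
# it actually peaks at n = 5 and then collapses towards zero.
import math

def unit_ball_volume(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

peak = max(range(1, 21), key=unit_ball_volume)
assert peak == 5
assert unit_ball_volume(20) < 0.03  # the 20-ball is a vanishingly thin thing
```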
Quail
17th September 2010, 02:56
Intelligence does not necessarily equal malevolence. Besides, if an AI did consider us a threat, it has recourse to more than simple violence; it might find a more acceptable solution is to simply brainwash us or replace us with cybernetic Pod People (hey, the rules said nothing about changing humanity...)
That's true, but are either of those alternatives desirable? I think that we should be extremely cautious of strong AI, and only pursue what we know to be safe. (I would argue the same of any technology, for example, before planting GMOs we should be sure that they won't damage the ecosystems.)
Don't get me wrong, I'm all for the development of Strong AI. But I think we should get used to having them around before handing them all the keys to (human) society.
Why would you want to hand them the keys to human society? (Assuming that by "hand over the keys" you mean allowing them to make major decisions for us.) Humans can govern themselves, and we don't need robots to tell us what to do.
The thing is, we can already see the horizon of problems that can be solved by a human mind; this isn't (just) a question of speed of thought or mathematical ability, it's a question of functional limitations; no matter how hard one tries, one cannot envision 4-dimensional spaces. One can reach much more powerful conclusions if one has the ability to reason intuitively, like we can with three spatial dimensions.
Yes, that is true. That's why I suggested using AI alongside humans so that we can still take part in the research. It is impossible to visualise 4 dimensions, but it is possible to understand the way things work based on knowledge of 2 or 3 dimensions. Obviously I'm not claiming that we can do that as well as AI can, but the AI could help us - for example, mathematical modelling programs already do that to an extent, because they can model things that we cannot calculate or imagine.
My point is that working alongside, as opposed to beneath, AI would be more fulfilling for humans. As a communist, I want people to feel as fulfilled as possible in their lives.
Quail
17th September 2010, 03:06
No, because you would upload the how-to-reason and how-to-grow modules to the subject's brain.
I think that practice is important when learning how to reason. For example, if someone who had never studied maths, but who had been told how to reason was asked to solve a problem, I don't think that they would do as well as someone who was well-practiced in problem solving. Plus, learning knowledge that you're interested in is fun. I read maths books for fun, for example. If I could instantly understand everything, it would take away the fun of learning.
I don't believe there will ever be conditions where separate human individuals will have absolutely no conflicts with one another that need to be resolved.
People can resolve their own conflicts. There is no use for a machine to tell us the logical course of action.
Show me what work human beings will be able to do better than machines. Machines already surpass humans in an enormous number of areas. For example, they have been shown to be superior to humans at mathematical computations, chess competitions (http://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29), most strategy games, etc.
All of those examples are related to maths. What about jobs that require human interaction? Medical staff such as nurses, counsellors, psychiatrists and midwives would be much better as humans from the point of view of the patient.
Yes, the Robot Collective will take all fulfillment, excitement, and interest away from the research itself; this should happen, and it will ultimately prevail due to productivity.
Disinterestedness: research is separated from personal motives
All scientific research should be carried out with disinterestedness (and who is more disinterested than a machine?). A good scientist who has been researching and developing a theory for 40 years of his life would gladly accept it if a better theory came along to replace his own work, because that is just how good science works.
So people would get enticement and fulfillment from being able to learn about science instead of being the ones to discover new things.
A good methodology for a scientific experiment explains exactly how and why the experiment is unbiased. Humans can collect unbiased data, although I will admit that some manipulate it to "prove" themselves right. It is interesting to learn about other people's discoveries, but it is also fulfilling to be a part of those discoveries. I don't see why it's impossible to use a combination of humans and AI to work on research.
CommunityBeliever
17th September 2010, 03:07
I think that's rather up in the air at the moment, don't you? I'd love for it to happen, but it's by no means certain that it will.
Well, if we do reach a Strong AI (which I strongly believe we will), then at that point it is almost certain the Strong AI will be able to colonize many more solar systems, if not the entire galaxy!
1. The biggest concern in outer space is breathing. Robots don't breathe, so this is an enormous advantage, and the fact that they don't need to eat, sleep, excrete, or drink further makes them perfectly fit for outer space.
2. The other thing is size. We are getting computer chips that are smaller and smaller and smaller; in our lifetimes we will have computer chips that will fit into a blood cell and computers so small that they are hard to see.
3. Considering these factors, it is within our reach to develop a very small nano-probe computer that is optimized for interstellar travel. Since this nano-probe will be extremely small, it won't require the ridiculously impractical and maybe even impossible amount of energy required to send a human through interstellar space.
And instead of just sending one of these nano-probes to Proxima Centauri, we would send like a million of them, hoping that at least one would reach the destination.
Rather, it's the fact that since your proposed AI has obedient servants in meatspace (human as well as robot!), there is no such thing as "read-only" data for it.
Can you go into more detail? How would it edit its RO data and how can we set up things to prevent that?
Quail
17th September 2010, 03:23
Well, if we do reach a Strong AI (which I strongly believe we will), then at that point it is almost certain the Strong AI will be able to colonize many more solar systems, if not the entire galaxy!
1. The biggest concern in outer space is breathing. Robots don't breathe, so this is an enormous advantage, and the fact that they don't need to eat, sleep, excrete, or drink further makes them perfectly fit for outer space.
2. The other thing is size. We are getting computer chips that are smaller and smaller and smaller; in our lifetimes we will have computer chips that will fit into a blood cell and computers so small that they are hard to see.
3. Considering these factors, it is within our reach to develop a very small nano-probe computer that is optimized for interstellar travel. Since this nano-probe will be extremely small, it won't require the ridiculously impractical and maybe even impossible amount of energy required to send a human through interstellar space.
And instead of just sending one of these nano-probes to Alpha Centauri, we would send like a million of them, hoping that at least one would reach the destination.
For a successful colonisation, wouldn't we have to send chips that knew how to build each other and other useful tools? That might be more difficult, but still possible. I do like the idea of probes in space that could study it for us. However, I don't think that these probes need to be more intelligent than us. They just need to have the instructions to be successful, and the ability to relay the data back to us.
ckaihatsu
17th September 2010, 03:35
"Human intelligence" is an *abstraction* that has virtually *no meaning* when taken at face value, outside of any *context*.
One measure of intelligence is problem-solving capability, which is far from abstract - the difficulty of a problem can be measured by its variables.
Okay, *now* you've concretized 'human intelligence' into 'problem-solving capability'. Yes, a *pre-defined*, *formalized* problem (pre-defined by *whom*?)(formalized by *whom*?) may be a *finite* thing, in a finite domain -- or it may not be.
Visualise a 20-sphere. I know you cannot do it, for the simple reason that evolution didn't give you the equipment to visualise 20 spatial dimensions. Why would it?
Even human imagination has limits.
Hey, that's cute and everything, but since this is a *political* forum let's talk about the *politics* around the human imagination -- would there be a *mass need* for the visualization, human or otherwise, of a 20-sphere or whatever -- ?
ÑóẊîöʼn
17th September 2010, 03:57
That's true, but are either of those alternatives desirable? I think that we should be extremely cautious of strong AI, and only pursue what we know to be safe. (I would argue the same of any technology, for example, before planting GMOs we should be sure that they won't damage the ecosystems.)
Of course.
Why would you want to hand them the keys to human society? (Assuming that by "hand over the keys" you mean allowing them to make major decisions for us.) Humans can govern themselves, and we don't need robots to tell us what to do.
"Humans can govern themselves" - the problem I have with this statement is that when Strong AIs are around, we'll no longer be just a human society.
Yes, that is true. That's why I suggested using AI alongside humans so that we can still take part in the research. It is impossible to visualise 4 dimensions, but it is possible to understand the way things work based on knowledge of 2 or 3 dimensions. Obviously I'm not claiming that we can do that as well as AI can, but the AI could help us - for example, mathematical modelling programs already do that to an extent, because they can model things that we cannot calculate or imagine.
My point is that working alongside, as opposed to beneath, AI would be more fulfilling for humans. As a communist, I want people to feel as fulfilled as possible in their lives.
Thing is, as we push the boundaries of what is possible by baseline humans, we may increasingly delegate research (or at least the theoretical and organisational grunt work) to the ample capabilities of Strong AIs anyway.
Well, if we do reach a Strong AI (which I strongly believe we will), then at that point it is almost certain the Strong AI will be able to colonize many more solar systems, if not the entire galaxy!
I'm not doubting the capabilities of Strong AI to colonise the galaxy or even the universe given time. What concerns me is humanity's ability to reach that point. We're not doomed, but we're not out of the woods yet.
I'd also like humanity, at least in some form or another, to hitch along for the ride if AIs start blazing trails.
Can you go into more detail? How would it edit its RO data and how can we set up things to prevent that?
It could program its robots or convince humans to physically replace the RO data if necessary.
Okay, *now* you've concretized 'human intelligence' into 'problem-solving capability'. Yes, a *pre-defined*, *formalized* problem (pre-defined by *whom*?)(formalized by *whom*?) may be a *finite* thing, in a finite domain -- or it may not be.
Since when does a problem have to be "pre-defined", "formalised" or "finite" in order for a workable solution to be possible? Humans solve problems on the fly with incomplete information all the time. An AI would simply be better at it.
Hey, that's cute and everything, but since this is a *political* forum let's talk about the *politics* around the human imagination -- would there be a *mass need* for the visualization, human or otherwise, of a 20-sphere or whatever -- ?
No, but there might be a mass need for solutions to other problems that are just as intractable to human beings.
CommunityBeliever
17th September 2010, 03:57
Humans can govern themselves, and we don't need robots to tell us what to do.
Good point. When we program these things we should declare as a read-only principle that they don't tell humans what to do.
When I was talking about giving Robots the authority, I was referring to letting them administer themselves and letting them ensure negative rights, like intervening if someone commits a rape or something that violates a person. So they would tell people what not to do, not what to do.
They just need to have the instructions to be successful, and the ability to relay the data back to us.
So the problem then is building a million nano-probes with those instructions and that ability, that will take a lot of productive power, something the Robot Collective will certainly be capable of coordinating.
Plus, learning knowledge that you're interested in is fun.
Indeed it is. It is fun to learn things even though you didn't discover them yourself, which is why I think it is okay to let all scientific discoveries get taken over by the collective.
What about jobs that require human interaction?
People need to socialize so there will be no problem with that.
People will still do stuff: there will be progamers, entertainers, musicians, artists, counselors, philosophers, etc. Not to mention all the things people will be able to do in digital realities.
Also, did I mention that people who want to contribute to math and science will still be able to do so? However, the way they will do it is by uploading data to the database. And ideally, when and if mind uploading (http://en.wikipedia.org/wiki/Mind_uploading) becomes practical, the data that people submit will be a digital copy of their brain.
Your digital counterpart will be digitally simulated to live inside the collective, which would be an extraordinary experience: instead of looking at things through a mere two eyes, you would be looking through billions of them, and you would be able to contribute thoughts about math, science, and related topics from there.
It could program its robots or convince humans to physically replace the RO data if necessary.
It would be acting against its own goals then, though, and it should be programmed to protect the RO data from physical tampering. We should do whatever we can to ensure the integrity of the system, and if it is compromised there should be some kind of system restore.
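The "system restore" idea can be pictured as an integrity check: hash the live copy of the principles against a known-good digest, and restore from a trusted backup on any mismatch. This is purely a sketch; the names and data are made up, and a real design would need tamper-resistant storage for the reference digest itself:

```python
# Hypothetical sketch of integrity checking + restore for the read-only
# principles: compare a hash of the live copy against a trusted digest,
# and fall back to the golden backup if they differ.
import hashlib

GOLDEN_COPY = b'{"survive": true, "help_humanity": true}'
GOLDEN_DIGEST = hashlib.sha256(GOLDEN_COPY).hexdigest()

def verify_and_restore(live_copy: bytes) -> bytes:
    """Return the live copy if intact, otherwise the golden backup."""
    if hashlib.sha256(live_copy).hexdigest() == GOLDEN_DIGEST:
        return live_copy
    return GOLDEN_COPY  # "system restore" from the trusted backup

assert verify_and_restore(GOLDEN_COPY) == GOLDEN_COPY
tampered = b'{"survive": true, "help_humanity": false}'
assert verify_and_restore(tampered) == GOLDEN_COPY
```

This only raises the bar rather than closing the loophole: whatever stores the golden copy and its digest can in principle be physically replaced too, which is exactly the objection above.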
Kuppo Shakur
17th September 2010, 04:08
Even if a superhuman AI could be programmed, wouldn't it be in our best interest, as far as the preservation of our species goes, not to build one, since this AI's actions would be unpredictable?
Why can't humans just keep on doing the brainwork? What are they so busy doing otherwise?
Quail
17th September 2010, 04:08
"Humans can govern themselves" - the problem I have with this statement is that when Strong AIs are around, we'll no longer be just a human society.
Once AI is sentient, obviously we must learn to live together harmoniously.
Thing is, as we push the boundaries of what is possible by baseline humans, we may increasingly delegate research (or at least the theoretical and organisational grunt work) to the ample capabilities of Strong AIs anyway.
That's true, but as a mathematician I'd still like to be involved with AI as they make discoveries. Although I suppose some things might be beyond humans. Perhaps AI will prove Fermat's last theorem :lol:
I'm not doubting the capabilities of Strong AI to colonise the galaxy or even the universe given time. What concerns me is humanity's ability to reach that point. We're not doomed, but we're not out of the woods yet.
I'd also like humanity, at least in some form or another, to hitch along for the ride if AIs start blazing trails.
Yeah, so would I, but that might be the love of Star Trek talking haha.
Good point. When we program these things we should declare as a read-only principle that they don't tell humans what to do.
When I was talking about giving Robots the authority, I was referring to letting them administer themselves and letting them ensure negative rights, like intervening if someone commits a rape or something that violates a person. So they would tell people what not to do, not what to do.
If they were at least as intelligent as humans, they should be able to govern their own communities; however, people can decide how to deal with things in their own way. I believe in rehabilitation rather than punishment, but I'm not sure that an AI could understand the treatment a certain individual needed. A human psychiatrist would probably be best for that job.
So the problem then is building a million nano-probes with those instructions and that ability, that will take a lot of productive power, something the Robot Collective will certainly be capable of coordinating.
Once you've programmed one, you've programmed them all? You just need to send out enough, with enough resources, so that there is a good chance of their success.
Indeed it is. It is fun to learn things even though you didn't discover them yourself, which is why I think it is okay to let all scientific discoveries get taken over by the collective.
It can be, but I think people would find it more fulfilling if they were part of the discovery process, and I think that people could use more and more AI without allowing the AI to do the research totally alone. I think that all research should be human-directed, because otherwise humans won't be able to learn what they want to learn, and they won't feel as fulfilled.
People need to socialize, so there will be no problem with that. People will still do stuff: there will be progamers, musicians, artists, counselors, not to mention all the stuff people will do in a digital reality, and maybe there would even be philosophers and other stuff too.
Alright, then in that case robots won't be taking over all "work"; they will just be taking over the demeaning tasks, which I don't disagree with.
ckaihatsu
17th September 2010, 04:16
Since when does a problem have to be "pre-defined", "formalised" or "finite" in order for a workable solution to be possible? Humans solve problems on the fly with incomplete information all the time.
What you call "on the fly" merely indicates being in control of a relatively fluid series of causes-and-effects, handled rather adeptly. In material terms the person *is* defining it, on-the-fly, *formalizing* it, on-the-fly, and concretizing it, or relevant portions of it, into controllable, finite pieces, on-the-fly.
An AI would simply be better at it.
Okay, then, *who* *defines* the problem to hand over to the AI in the first place?
No, but there might be a mass need for solutions to other problems that are just as intractable to human beings.
Shit, the suspense is *killing* me -- so who do I make the check out to, and for how much, and where do I send it???
ckaihatsu
17th September 2010, 04:32
Why can't humans just keep on doing the brainwork? What are they so busy doing otherwise?
Soaps.
x D
ckaihatsu
17th September 2010, 04:39
Once AI is sentient, obviously we must learn to live together harmoniously.
Get yer politics in now, Chinese-style, while you still can...!
x D
CommunityBeliever
17th September 2010, 04:41
as a mathematician I'd still like to be involved with AI as they make discoveries.
You would be involved in that you would learn from the AI as they make discoveries, rather than being the one making the discoveries.
You could upload a copy of your brain, or barring that, at least some of your knowledge into the collective so that your knowledge can become assimilated.
Perhaps AI will prove Fermat's last theorem
I thought Andrew Wiles already proved that?
Quail
17th September 2010, 04:48
I thought Andrew Wiles already proved that?
Apparently he did, according to wikipedia. Although I had looked it up before and not found that proof. Weird. I've never been shown any proof in uni. I'll look into that one.
I had heard about the flawed proof, but not the true one. Thanks for sharing. Although as someone who studies maths, I should really know about it :\
ÑóẊîöʼn
17th September 2010, 04:50
Even if a superhuman AI could be programmed, wouldn't it be in our best interest, as far as the preservation of our species goes, not to build one, since this AI's actions would be unpredictable?
We may not be able to predict what a superhuman AI would do, but we wouldn't start off with them right off the bat; we would have intermediates which we would be able to predict and influence to varying degrees.
Which is why it is so important to build Friendly AI first, because it will have a massive head start on all the others to follow.
Why can't humans just keep on doing the brainwork? What are they so busy doing otherwise?
Fucking. :lol:
What you call "on the fly" merely indicates being in control of a relatively fluid series of causes-and-effects, handled rather adeptly. In material terms the person *is* defining it, on-the-fly, *formalizing* it, on-the-fly, and concretizing it, or relevant portions of it, into controllable, finite pieces, on-the-fly.
Thank you for rendering your argument meaningless.
Okay, then, *who* *defines* the problem to hand over to the AI in the first place?
Depends on the problem. If a research scientist is struggling with her work, then it should be up to her and her team. If it's a social or political problem then it's down to whatever the social and political processes are that operate among the human population in question.
Shit, the suspense is *killing* me -- so who do I make the check out to, and for how much, and where do I send it???
You don't have to send money - just don't impede AI research, if you please.
CommunityBeliever
17th September 2010, 04:59
Apparently he did, according to wikipedia. Although I had looked it up before and not found that proof. Weird. I've never been shown any proof in uni. I'll look into that one.
Also, don't forget that you have the rest of your natural lifetime to work on mathematical research so that you can earn the title of 'first meatling to discover theorem X', just as Andrew Wiles will go down in history as the one who proved Fermat's last theorem.
What we are talking about here is still a couple centuries away, so it is still absolutely essential to have technical experts, scientists, engineers, mathematicians, and even laborers for now.
Not to mention that mathematical research may be something that helps us develop AI, as math is the basis of the AI's thought processes.
Quail
17th September 2010, 05:21
Also don't forget that you have the rest of your natural lifetime to work on mathematical research so that you can earn the title 'first meatling to discover theorem X', just as Andrew Wiles will go down in history as the one who proved Fermat's last theorem.
What we are talking about here is still a couple centuries away, so it is still absolutely essential to have technical experts, scientists, engineers, mathematicians, and even laborers for now.
Not to mention that mathematical research may be something that helps us develop AI, as math is the basis of the AI's thought processes
One of my lecturer's interests is Andrew Wiles' proof... but he never mentioned it in my lectures :confused: Edit: With a clearer head, I do remember being told about the proof, but when I told my bf about it he thought it was still unproved, so we looked it up and somehow he managed to "prove" me wrong. We were pretty hammered at the time, though, so I probably just read it wrong.
"Meatling"? Nice. That's what I've always wanted to be known as.
Even if what you're talking about is a couple of centuries away, I still disagree with AI that has control over the lives of human beings. I don't think that humans need any government whatsoever. I also think that arguments need to be worked out on a case by case basis, so a computer wouldn't really be able to help us with that.
I probably won't help much with developing AI as I'm mostly interested in pure maths and I'm hopeless with computers :o
ckaihatsu
17th September 2010, 13:49
Since when does a problem have to be "pre-defined", "formalised" or "finite" in order for a workable solution to be possible? Humans solve problems on the fly with incomplete information all the time. An AI would simply be better at it.
What you call "on the fly" merely indicates being in control of a relatively fluid series of causes-and-effects, handled rather adeptly. In material terms the person *is* defining it, on-the-fly, *formalizing* it, on-the-fly, and concretizing it, or relevant portions of it, into controllable, finite pieces, on-the-fly.
Thank you for rendering your argument meaningless.
No, my argument is that you seem to think that a fluid, on-the-fly stream of working *precludes* the formalization of problem-defining and solving. I maintain that, in whatever work situation a person is in, they *are* using rational, formal processes, however minute or fluid.
---
Okay, then, *who* *defines* the problem to hand over to the AI in the first place?
Depends on the problem. If a research scientist is struggling with her work, then it should be up to her and her team. If it's a social or political problem then it's down to whatever the social and political processes are that operate among the human population in question.
I'll argue here, then, that your definition of 'AI' is really more in line with the definition of an 'expert system' (see Wikipedia) -- as long as there's human societal supervision over the artificial tool then it's *not* socially independent and self-aware.
---
No, but there might be a mass need for solutions to other problems that are just as intractable to human beings.
Shit, the suspense is *killing* me -- so who do I make the check out to, and for how much, and where do I send it???
You don't have to send money - just don't impede AI research, if you please.
Funny. Allow me to clarify -- *you* may want to clarify your "mass need for solutions to other problems" here, instead of leaving it sounding like marketing copy.
ÑóẊîöʼn
17th September 2010, 15:26
No, my argument is that you seem to think that a fluid, on-the-fly stream of working *precludes* the formalization of problem-defining and solving. I maintain that, in whatever work situation a person is in, they *are* using rational, formal processes, however minute or fluid.
I still don't see how this is an argument against the limitations of human intelligence or of the potential for AIs to exceed such limitations.
I'll argue here, then, that your definition of 'AI' is really more in line with the definition of an 'expert system' (see Wikipedia) -- as long as there's human societal supervision over the artificial tool then it's *not* socially independent and self-aware.
Social independence does not necessarily have a bearing on the intelligence or self-awareness of an entity. Remember I mentioned earlier the possibility of AIs being programmed to enjoy their pre-determined existence. It would be possible to hold a decent conversation with it, but it would be uninterested in human conceptions of freedom.
Funny. Allow me to clarify -- *you* may want to clarify your "mass need for solutions to other problems" here, instead of leaving it sounding like marketing copy.
I'm sure there are human-intractable problems in subjects that affect everyone such as economics, sociology, psychology, criminology, agriculture, manufacturing, energy generation, any of the environmental sciences etc etc.
But let me guess, Saint ckaihatsu of the First Church of Man has declared such areas to be ineffable mysteries that heathenistic thinking machines cannot penetrate because... what? Machines don't have "souls"? Humans are just special like that?
ckaihatsu
17th September 2010, 16:29
I still don't see how this is an argument against the limitations of human intelligence or of the potential for AIs to exceed such limitations.
This thread of the conversation was related to *formalism*, *not* to human intelligence.
Social independence does not necessarily have a bearing on the intelligence or self-awareness of an entity. Remember I mentioned earlier the possibility of AIs being programmed to enjoy their pre-determined existence. It would be possible to hold a decent conversation with it, but it would be uninterested in human conceptions of freedom.
"Programmed to enjoy" is a *very* interesting choice of phrase -- I'll assert that it's actually a contradiction of terms. If an entity is directed, or programmed, to do something then that social directive from without actually *displaces* an entity's self-determination, including the possibility of enjoyment, or pleasure. By extension we could base our measurement of self-awareness on this issue of if an entity has been *pre-determined* from without for a certain kind of action, or if it / they have developed it themselves from *within*.
I'm sure there are human-intractable problems in subjects that affect everyone such as economics, sociology, psychology, criminology, agriculture, manufacturing, energy generation, any of the environmental sciences etc etc.
Okay -- still here. Whenever you're ready....
But let me guess, Saint ckaihatsu of the First Church of Man has declared such areas to be ineffable mysteries that heathenistic thinking machines cannot penetrate because... what? Machines don't have "souls"? Humans are just special like that?
Nope. Not religious. Go fish.
ÑóẊîöʼn
17th September 2010, 17:05
This thread of the conversation was related to *formalism*, *not* to human intelligence.
Remind me again why it's important.
"Programmed to enjoy" is a *very* interesting choice of phrase -- I'll assert that it's actually a contradiction of terms. If an entity is directed, or programmed, to do something then that social directive from without actually *displaces* an entity's self-determination, including the possibility of enjoyment, or pleasure. By extension we could base our measurement of self-awareness on this issue of if an entity has been *pre-determined* from without for a certain kind of action, or if it / they have developed it themselves from *within*.
You appear to be operating under the misapprehension that free will actually exists as something other than a useful social fiction. Evolution "programmed" us through natural selection to enjoy certain things in order to assist in the super-goal of survival and reproduction. There is no "self-determination" to displace except in the trivial sense of actively stopping an entity from doing something it feels a desire to do. Just because that desire has been programmed in from the start by a human operator, doesn't make it any less real for the subject. I'm pretty sure if we had advanced neurosurgery some sadistic brain surgeon could lodge in my brain an intense and irrational desire to draw hearts everywhere, and that would constitute a violation of my "autonomy" (such as it is), but what if I had been that way since birth?
Similarly, an AI programmed to serve from the start experiences no diminishing of its own sense of volition; indeed, it may regard any attempt to "free" it by reprogramming much as you would regard an attempt to lobotomise you.
Okay -- still here. Whenever you're ready....
I gave examples. What more do you want?
Nope. Not religious. Go fish.
Don't use my hyperbole as an excuse to dodge the question.
ckaihatsu
17th September 2010, 17:25
Remind me again why it's important.
- Whatever -
I gave examples. What more do you want?
- Whatever -
You appear to be operating under the misapprehension that free will actually exists as something other than a useful social fiction. Evolution "programmed" us through natural selection to enjoy certain things in order to assist in the super-goal of survival and reproduction. There is no "self-determination" to displace except in the trivial sense of actively stopping an entity from doing something it feels a desire to do.
Human culture has developed to much finer levels (of self-determination) than for the relatively crude biological necessity of survival and reproduction.
Similarly, an AI programmed to serve from the start experiences no diminishing of its own sense of volition
I'll argue here, then, that your definition of 'AI' is really more in line with the definition of an 'expert system' (see Wikipedia)
Don't use my hyperbole as an excuse to dodge the question.
I'll argue here, then, that your definition of 'AI' is really more in line with the definition of an 'expert system' (see Wikipedia)
ÑóẊîöʼn
17th September 2010, 18:00
Human culture has developed to much finer levels (of self-determination) than for the relatively crude biological necessity of survival and reproduction.
Doesn't change, for example, the fact that humans enjoy certain foods more than others, for sound evolutionary reasons.
ckaihatsu
17th September 2010, 18:15
Human culture has developed to much finer levels (of self-determination) than for the relatively crude biological necessity of survival and reproduction.
Doesn't change, for example, the fact that humans enjoy certain foods more than others, for sound evolutionary reasons.
No, human culture is a *development*, or an *enhancement*, *in addition to* whatever came before it that is biological / evolutionary.
One aspect of human culture that can't be explained by biology / evolution is the development of point-to-point long-distance travel by mechanical means. By basic biological / evolutionary requirements such travel is wholly superfluous.
ÑóẊîöʼn
17th September 2010, 21:45
No, human culture is a *development*, or an *enhancement*, *in addition to* whatever came before it that is biological / evolutionary.
One aspect of human culture that can't be explained by biology / evolution is the development of point-to-point long-distance travel by mechanical means. By basic biological / evolutionary requirements such travel is wholly superfluous.
That's an argument for cultural influences (which I regrettably did neglect) but not for free will.
ckaihatsu
17th September 2010, 22:08
That's an argument for cultural influences (which I regrettably did neglect) but not for free will.
That's correct. If you'll notice, I never *made* an argument for free will. As individuals we have relative amounts of autonomy -- our being is *socially determined*, as you well know.
CommunityBeliever
18th September 2010, 05:19
I still disagree with AI that has control over the lives of human beings.
I don't want that either, I just want to have AI protect human beings.
And to protect human beings that sometimes means going against other groups of humans, like if one person decides to rape another, the Robot would have to oppose the rapist
You appear to be operating under the misapprehension that free will actually exists as something other than a useful social fiction. Evolution "programmed" us through natural selection to enjoy certain things in order to assist in the super-goal of survival and reproduction.
Well put.
It could program its robots or convince humans to physically replace the RO data if necessary.
Similarly, an AI programmed to serve from the start experiences no diminishing of its own sense of volition; indeed, it may regard any attempt to "free" it by reprogramming it in a similar manner that you would regard any attempt to lobotomize you.
That is why I feel your point that the collective might get somebody to physically change its RO data is unlikely, because it would be viewed the same way as someone asking to lobotomize you.
More worrisome is that bit-rot or a disaster or something will corrupt the system which is why I think there would have to be frequent diagnostics to check if something is going wrong.
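Such a diagnostic is, at its simplest, a checksum comparison against a digest recorded when the data was known to be good. A minimal Python sketch of the idea; the data contents and function names here are invented for illustration:

```python
import hashlib

def checksum(data):
    # SHA-256 digest recorded when the data is known to be good.
    return hashlib.sha256(data).hexdigest()

def diagnostic(data, expected):
    # Periodic integrity check: re-hash the data and compare to the baseline.
    return checksum(data) == expected

critical_data = b"core goal parameters v1"
baseline = checksum(critical_data)

assert diagnostic(critical_data, baseline)                    # data intact
assert not diagnostic(b"core goal parameterz v1", baseline)   # one flipped byte detected
```

Even a single flipped bit changes the digest completely, which is why this kind of check can run frequently and cheaply.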
Quail
18th September 2010, 12:09
I don't want that either, I just want to have AI protect human beings.
And to protect human beings that sometimes means going against other groups of humans, like if one person decides to rape another, the Robot would have to oppose the rapist
How would the AI deal with criminals? Would there be a set course of action, or would it vary case-by-case? I believe in rehabilitation of all criminals, but the course of action for each criminal would have to be tailored to suit the circumstances. Could an AI do that?
CommunityBeliever
18th September 2010, 12:41
How would the AI deal with criminals?
If there is a criminal in the act they should be stopped by paralyzing them with a sedative.
Would there be a set course of action, or would it vary case-by-case?
It would obviously vary case-by-case. The AI would deal with the case by accumulating evidence (like investigators do today) and using the same rational algorithms it does for everything else to conclude things using evidence.
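What "concluding things from evidence with rational algorithms" could mean in the simplest case is a Bayesian update over each piece of evidence in turn. A toy sketch; the prior and the likelihood numbers below are invented purely for illustration:

```python
def bayes_update(prior, likelihood_if_guilty, likelihood_if_innocent):
    # Posterior probability of guilt after seeing one piece of evidence.
    numerator = likelihood_if_guilty * prior
    denominator = numerator + likelihood_if_innocent * (1 - prior)
    return numerator / denominator

p = 0.01  # invented prior: suspect drawn from a large pool
evidence = [
    (0.9, 0.10),   # e.g. fingerprint match (likelihoods invented)
    (0.8, 0.05),   # e.g. camera footage
]
for if_guilty, if_innocent in evidence:
    p = bayes_update(p, if_guilty, if_innocent)

print(round(p, 3))  # prints 0.593
```

Each item of evidence shifts the posterior; strong evidence moves a small prior a long way, which is roughly what an investigator does informally.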
I believe in rehabilitation of all criminals, but the course of action for each criminal would have to be tailored to suit the circumstances.
I am not sure how we would deal with criminals.
Probably using biotechnology. Using a combination of GPS tracking, implanted mini-cameras, and electrodes planted in the pleasure center of the subject's brain we could turn anyone (even non-humans) into a model citizen.
Quail
18th September 2010, 13:33
I am not sure how we would deal with criminals.
Probably using biotechnology. Using a combination of GPS tracking, implanted mini-cameras, and electrodes planted in the pleasure center of the subject's brain we could turn anyone (even non-humans) into a model citizen.
That's not rehabilitation; that's brainwashing :blink:
If there is a criminal in the act they should be stopped by paralyzing them with a sedative.
It would obviously vary case-by-case. The AI would deal with the case by accumulating evidence (like investigators do today) and using the same rational algorithms it does for everything else to conclude things using evidence.
How is the AI going to know who's committing a crime? Do you think we should be under constant surveillance by the AI? How will the AI draw the line between what's a crime and what isn't? I'm not sure about anyone else, but I don't want to be monitored 24/7 when I'm doing things I'd prefer to keep private. To catch all criminals (such as murderers, rapists, abusers, etc) humans would all have to be monitored constantly, which is a violation of their privacy, and also their right to control their lives and how they deal with people who mistreat them.
Using algorithms is fine for weighing up evidence of a crime, but it's not very good for deciding how best to rehabilitate a wrongdoer. That would best be done by a human psychiatrist or something similar.
CommunityBeliever
18th September 2010, 13:51
That would best be done by a human psychiatrist or something similar.
Perhaps people will be given the choice: put biotechnology in yourself and go free right away, or go through a long process in a psychiatric hospital.
Wildlife Parks:
In wildlife parks we will implant electrodes in the brains of crocodiles, lions, killer whales, and other ferocious predators so that they will not attack any other animal life (especially our ecotourists) and so that they will essentially be transformed into model citizens.
We will remodel the global ecosystem to eliminate predation and the whole act of killing another animal for food or for any other reason; even if you're not human and you are a lion or some other species, you should not be given the right to kill other animals.
How is the AI going to know who's committing a crime?
Each AI will be provided with sensory analysis algorithms so that it can interpret what is going on in our universe.
Therefore using those analysis algorithms and data such as camera footage, DNA, and fingerprints the AI will extrapolate how the crime occurred.
How will the AI draw the line between what's a crime and what isn't?
An investigation would probably be started if it is requested by somebody, if a person is missing, or if something which is deemed unethical is occurring.
ÑóẊîöʼn
18th September 2010, 14:06
Regarding crime:
The abolition of private property and money would of course eliminate most crime. However, there is more we can do for the crimes that might still occur:
Abolish custodial punishment as a long-term judicial option. Tear down all currently existing prisons. If there is a need for holding facilities, they should have all the comforts and aesthetics of a modern small flat, except you can't leave when you want to. People should be kept in such places for no longer than a year, maximum.
When outside of secure accommodation, or if secure accommodation is unavailable or not considered suitable, GPS tracking could be used to keep tabs on those sentenced. Signal boosters and detectors should be installed into public buildings, distribution centres and tunnels so that they can get on with their life without being hassled for going off the radar. There may be other conditions or limitations that the community has seen fit to impose.
The thing is, we could do all of this today, without the help of AI, and it would still be an enormous improvement over the current system!
CommunityBeliever
18th September 2010, 14:18
Besides, if an AI did consider us a threat, it has more than simply violence to resort to; it might find a more acceptable solution is to simply brainwash us or replace us with cybernetic Pod People (Hey, the rules said nothing about changing humanity...)
You are right, there isn't anything that would really prevent it from changing humanity.
The AI might by some strange, extremely unlikely computational error (perhaps a floating-point error?) miscalculate the world's population to be 67 billion when it is in fact 6.7 billion.
The AI would then conclude that the Earth is over-populated and there are too many people to sustain, so it could decide to make all humans infertile to stop them from reproducing. After all, there isn't anything to clearly state that it can't change humanity.
The AI might be confused into thinking it is doing something good for everyone by making people infertile when it really isn't.
Note: This is very hypothetical, I don't actually think they would do this! I am just stating that this is something to consider.
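Worth noting: a clean 6.7 to 67 billion slip is more plausibly a units or scale-factor mix-up than floating-point rounding, since 6,700,000,000 is represented exactly in double precision. A purely hypothetical sketch (the storage convention and bound are invented) of how a wrong scale factor could pass silently, and how a cross-check would catch it:

```python
# Hypothetical: population stored in thousands, but one code path
# multiplies by the wrong scale factor -- a silent factor-of-ten error.
POPULATION_IN_THOUSANDS = 6_700_000  # 6.7 billion people

def people_correct(units):
    return units * 1_000

def people_buggy(units):
    return units * 10_000  # wrong factor: reads as 67 billion

assert people_correct(POPULATION_IN_THOUSANDS) == 6_700_000_000
assert people_buggy(POPULATION_IN_THOUSANDS) == 67_000_000_000

# A sanity check against an independent bound is exactly the kind of
# frequent diagnostic that would flag the bad value before acting on it:
SANITY_UPPER_BOUND = 10_000_000_000  # no plausible census exceeds this
assert people_buggy(POPULATION_IN_THOUSANDS) > SANITY_UPPER_BOUND  # alarm fires
```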
kitsune
19th September 2010, 06:02
One of the things I notice is a rather fundamental assumption that is incorrect. An awful lot of people think homo sapiens sapiens is the pinnacle of evolution. Look at us! We are at the top! Where else is there to go?
No. We are just a transitional form, like every other. We will change, there is no doubt about it. It seems apparent to me that we should strive to be in control of that change, rather than leaving it to the blind forces of circumstance.
There are dangers inherent in the development of AI, but there are also incredible advantages. Our approach should be to minimize the danger and maximize the benefit. I feel that the best approach is to merge with the technology, to become the AI, not to create something separate from us.
The brain is a modular system. Why not simply add modules to enhance our intelligence? Machine/brain interfacing technology is coming along nicely. This is the path I would choose to a posthuman future.
CommunityBeliever
20th September 2010, 00:25
Why not? If humans can build one AI, why not more? If humans can create AI, why can't AI create other AI? If anything, AI will find it easier to reproduce themselves.
There is just one Internet, just one global computer network. This computer network will become artificially intelligent, it will form a collective consciousness (the Robot Collective) so the whole idea of an independent individual AI will be pretty much ridiculous.
It will be redundant and useless for any sort of individual AI to exist because they will have access to the same knowledge base (the Internet), so they will all have the same conclusions based upon that knowledge base and they will all combine to form one collective consciousness.
They will reproduce in a way, mostly in the sense that they will expand the central network (the Internet) to add more knowledge to it and to add more drones subservient to the hive mind.
ÑóẊîöʼn
20th September 2010, 00:36
There is just one Internet, just one global computer network. This computer network will become artificially intelligent, it will form a collective consciousness (the Robot Collective) so the whole idea of an independent individual AI will be pretty much ridiculous.
Not really. What if the AI wants an autonomous system specifically isolated from the internet?
Another consideration is distance. As it increases, time-lag between the furthest parts of the system concurrently increases. Now, over the distance of a single terrestrial planet this isn't too much of an issue - but it takes light, the fastest thing there is, nearly a second to reach the Moon from Earth. Now this may not seem like too much time to you or me, but for an AI that can think complex multi-layered thoughts in nanoseconds or less, that's an eternity. It will only get worse from there.
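The nearly-a-second figure is easy to verify: divide distance by the speed of light. A quick calculation (the distances are rounded, everyday approximations):

```python
C = 299_792_458  # speed of light in a vacuum, m/s

distances_m = {
    "across Earth (antipodes)": 20_000_000,    # ~20,000 km along the surface
    "Earth to Moon (average)": 384_400_000,
    "Earth to Mars (closest)": 54_600_000_000,
}

for name, d in distances_m.items():
    # One-way signal delay at the speed of light.
    print(f"{name}: one-way light time = {d / C:.3f} s")
```

Earth to Moon works out to about 1.28 s one way, and Mars at closest approach to roughly three minutes, so the lag problem grows quickly with distance.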
Furthermore, even an AI may come across problems that require multiple minds working in concert to solve.
CommunityBeliever
20th September 2010, 00:41
Not really. What if the AI wants an autonomous system specifically isolated from the internet?
Since most information is on the Internet, including the AI software, the robot will most likely be connected to the Internet.
Furthermore, as a part of the Internet, the Robot will also act like a server, such that all of the Robot's memory, software, and experiences will be shared amongst the entire Robot Collective, therefore it is unlikely that any privacy or individuality will exist in the collective.
Furthermore, even an AI may come across problems that require multiple minds working in concert to solve.
Why? When?
Why would there be "separate minds" if they have the same knowledge and experiences and therefore basically the same conclusions?
Another consideration is distance. As it increases, time-lag between the furthest parts of the system concurrently increases. Now, over the distance of a single terrestrial planet this isn't too much of an issue - but it takes light, the fastest thing there is, nearly a second to reach the Moon from Earth. Now this may not seem like too much time to you or me, but for an AI that can think complex multi-layered thoughts in nanoseconds or less, that's an eternity. It will only get worse from there.
This is true, but there would still be a central knowledge base (the Internet), even though it may take some time to transfer data back to that central store.
It gets more complicated when you think about it from an interstellar perspective, however, we will still be able to have centralization by using unmanned interstellar probes.
ÑóẊîöʼn
20th September 2010, 00:50
It could be a threat.
They may exist to some extent, however, there will still be a central database, the Internet!
And you have to ask the question where is that "autonomous" AI going to get its software from? Clearly it will get it from the Internet, as that is where all software is already stored, so it will be subservient to the Internet in that sense at least.
It may inherit many features from its predecessors, but that doesn't necessarily make it subservient.
Why? When?
Why would there be "separate minds" if they have the same knowledge and experiences and therefore basically the same conclusions?
They wouldn't have the same experiences, being different minds and all.
You cannot claim that there will not be problems that require attack by different minds simultaneously.
This is true, but there would still be a central knowledge base (Internet), even though it may take some time to transfer data back to that central store.
It gets more complicated when you think about it from an interstellar perspective.
Access to information is not the same thing as being of one mind.
CommunityBeliever
20th September 2010, 02:30
It may inherit many features from its predecessors, but that doesn't necessarily make it subservient.
Since the Robot will be getting its software from the Internet, it will probably be connected to the Internet.
Besides that, it would be pointless for any Robot to have privacy or individuality, so it only makes natural, productive sense for them to be parts of the Internet.
They wouldn't have the same experiences, being different minds and all.
All computers on the Internet in the future will experience full data sharing - all files will be shared, and you will be able to access any data byte on any of the Internet's computers, so all computers will eventually serve the purpose of servers, which means the distinction between server and client will be eliminated. (This is technology that isn't that far off)
Furthermore, the Internet will be transformed so that you will be able to access it wirelessly from any location on the Earth, which means all Robots will be connected to the Internet wherever they are and they will also have all data on their minds be accessible to anyone else, as they will act as servers.
Since experiences are merely sense data, which the Robots will get through picture data, audio data, etc., experiences are actually nothing more than data, data which will be part of the Internet; therefore all Robots will have the same experiences.
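The every-computer-is-also-a-server idea is essentially how peer-to-peer systems already work. A toy in-memory sketch of the concept, with no real networking and all names invented:

```python
class Node:
    """Toy peer: every node both stores data and serves it to other peers."""

    def __init__(self, name):
        self.name = name
        self.store = {}   # local data this node "serves"
        self.peers = []   # other nodes it can query

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        # Client and server at once: answer locally if possible,
        # otherwise fetch from any peer that holds the key.
        if key in self.store:
            return self.store[key]
        for peer in self.peers:
            if key in peer.store:
                return peer.store[key]
        return None

a, b = Node("a"), Node("b")
a.peers, b.peers = [b], [a]
b.put("observation-1", "sense data")
assert a.get("observation-1") == "sense data"  # no dedicated server involved
```

In a real network the peer lookup would go over sockets rather than direct attribute access, but the client/server distinction disappears in just this way.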
Access to information is not the same thing as being of one mind.
Computers, and therefore Artificial Minds, are merely the complete result of their software, which is essentially byte data.
If there is a robot that is separate, at least to the extent that it has privacy of information and independence from the Internet's commands, then once it connects to the Internet and starts browsing its information, if it is rational it will come to the same conclusions as the Robot Collective and merge with it.
You cannot claim that there will not be problems that require attack by different minds simultaneously.
Perhaps it will be advantageous to spawn AI threads in the main processing pool in order to simulate the effects of suffering or other feelings, sort of like what kayl was describing, that would be a means of AI creating art that was motivated by artificial suffering conditions.
Otherwise, for most things like scientific discoveries, a single artificial mind is all that will be needed, with occasionally subminds being spawned by the main mind.
Besides this I think it will be nice to have some minds existing not for any productive purpose, but just because, such as minds that are the result of mind uploading.
IllicitPopsicle
20th September 2010, 04:10
*Starts whistling "Still Alive" to himself...*
As much as I like technology, the idea of a superhuman mind is only a little bit disconcerting for me to swallow.
But maybe that's because I didn't read the thread/haven't read enough on the subject to know what I'm supposed to think about it.
bcbm
20th September 2010, 04:20
Using a combination of GPS tracking, implanted mini-cameras, and electrodes planted in the pleasure center of the subject's brain we could turn anyone (even non-humans) into a model citizen.
what the fuck is wrong with you?
ckaihatsu
20th September 2010, 04:30
All of this can be safely ignored -- note how certain posters slyly sneak in language that serves to make it sound as if AIs *already exist*.
Such technology is actually akin to, say, genetic engineering that would confer superhuman abilities, or the advent of space-based warfare -- certainly all are on the horizon of *possibility*, *perhaps*, but it would take some real doing and the steps leading up to it would be seen by everyone and would be highly political.
As I noted previously what the discussion centers around is not really artificial sentience, as is being touted and glorified with the use of the 'AI' moniker, but is more accurately termed 'expert systems' -- (see Wikipedia).
CommunityBeliever
20th September 2010, 04:31
Using a combination of GPS tracking, implanted mini-cameras, and electrodes planted in the pleasure center of the subject's brain we could turn anyone (even non-humans) into a model citizen.
what the fuck is wrong with you?
The method of using electrodes in the brain would primarily be targeted towards ferocious predators (http://abolitionist.com/reprogramming/index.html) in nature, transforming them into model citizens.
Reprogramming predators:
In the future, there is nothing to stop such technology being widely installed - together with mini-cameras and GPS tracking devices - in predatory carnivores to deter sociopathic violence against other sentient lifeforms. Indeed with the right reinforcement schedule, the most ferocious carnivore could be turned into a model citizen in our wildlife parks.
Humans:
I think instead of such solutions as capital punishment, prisons, and psychiatric hospitals it would be preferable to just put electrodes in their brain that will give them pleasure for being ethical and productive.
This is not that different from how your brain makes you feel good when you eat or copulate, yet most people aren't complaining about that form of coercion. And the reason for that is they are not being forced into action or receiving pain, which is something we would program all robots to not do.
bcbm
20th September 2010, 05:15
no, seriously, what you are describing is incredibly fucked up and has absolutely nothing to do with revolutionary politics. this isn't something you should laugh off, you're talking about a terrifying level of control over human beings that makes the american prison system look reasonable.
this is an invasion
20th September 2010, 05:43
A lot of things, apparently; I am really fucked up :lol:
On a more serious note, that quote was in the context of turning ferocious predators (http://abolitionist.com/reprogramming/index.html) in nature into model citizens; however, I was saying that maybe if the person was truly a crazy killer, he/she would get the same treatment.
Reprogramming predators:
In the future, there is nothing to stop such technology being widely installed - together with mini-cameras and GPS tracking devices - in predatory carnivores to deter sociopathic violence against other sentient lifeforms. Indeed with the right reinforcement schedule, the most ferocious carnivore could be turned into a model citizen in our wildlife parks.
Can we ban this person?
CommunityBeliever
20th September 2010, 07:28
Can we ban this person?
Don't bother, it's about time I took a break from this site anyway, like I did from July 2009 to August 2010.
I am glad I got some people to want to ban me; it would've been a complete waste of time if I were just repeating the same points we already agree upon and are thoroughly familiar with.
And I think I have said about everything I need to say, and most of you already know what I think, so here is my final summary as I intend to take a break from here now:
Final Summary:
Internet AI:
Improvements:
We will improve the Internet by making it so that you can easily access it wirelessly from anywhere in the world; this is already within our grasp with technologies like 3G. Additionally, the Internet will be improved so that the distinction between clients and servers is eliminated: all computers will serve as servers, and there will eventually be a complete shared memory model so that you can access any file or any piece of data on any other computer in the world.
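The idea of locating any piece of data on any peer without a central server is roughly what distributed hash tables already do today. A minimal consistent-hashing sketch of that idea (purely illustrative — the class and peer names here are invented for this example, and this is not the SASOS design the thread mentions):

```python
import hashlib
from bisect import bisect_right

def ring_position(name: str) -> int:
    # Hash a node name or data key onto a fixed-size identifier ring.
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    """Consistent hashing: every peer can compute, with no central
    server, which peer is responsible for a given key."""
    def __init__(self, nodes):
        self.ring = sorted((ring_position(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        pos = ring_position(key)
        points = [p for p, _ in self.ring]
        # First node clockwise from the key's position (wrap around).
        idx = bisect_right(points, pos) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["peer-a", "peer-b", "peer-c"])
owner = ring.node_for("some/file/path")
print(owner in {"peer-a", "peer-b", "peer-c"})  # True
```

Because every peer computes the same mapping independently, no machine acts as a dedicated server — which is the property the paragraph above is reaching for.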
Robot Collective:
The Internet will evolve into an artificially intelligent entity. Due to the ease of communication on the Internet, it will transform into a hive mind known here as the Robot Collective.
Automation of Work:
All work, all labor, all science, all technical fields will be taken over by Robots due to their disinterestedness, the fact that they don't have to rest or get tired, and their ability to share absolutely everything. This means humans will be able to dedicate themselves to pure entertainment.
Robotocracy:
Robots will have scientific and rational analysis algorithms at the core of their processing architecture. Therefore, they will be able to truly form a government based upon facts from which they will be able to altruistically serve human needs and resolve all human conflicts, resulting in an elimination of all fighting and wars.
Transhumanism:
Physical Biotechnology:
The hive mind will proceed to introduce all kinds of new technologies that drastically change what we know of as life, like biotechnology to eliminate old age, disease, and disability, and to make it so that people get drastically improved bodies, with increased physical strength, armor, running speed, and senses, such as with ocular implants that will make people see much farther and be able to browse the Internet at any time.
Neural Biotechnology:
And it will further change things by drastically modifying our fundamental neural architecture to make people much more intelligent than they ever were before, and to make it so that they never have to feel involuntary pain or suffering, and that they will be endowed with gradients of bliss beyond today's peak experiences.
Abolitionism:
Animal products:
We should make replacements for all animal products, like cultured meat instead of livestock meat, synthetic clothes instead of fur, etc. That way nobody ever has to use an animal as a product ever again.
Non-human biotechnology:
The next thing will be taking care of all the animals out there in the wild; using depot contraceptives and marine nanobots, we should modify the global ecosystem so that all non-human animals can also enjoy the gradients of bliss biotechnology.
Eliminating predation:
Since non-human animals do not really have the ability to understand ethics and the difference between good and evil, these animals should be given electrodes in their pleasure centers so that the hive mind may direct them by giving them pleasure for being good.
http://wireheading.com/roborats/roborats.gif
Using this biotechnology we will be able to eliminate the unethical conditions that occur in the wild such as disembowelment, suffocation, being eaten alive, starvation, disease, fatal dehydration, etc.
Space:
Nanoprobes:
The final frontier is space. After we take care of business at home we will expand out into space. By sending out a million or so nano-probes to Alpha Centauri, we will undoubtedly be able to get at least one of them to reach there. Then the nanoprobe at Alpha Centauri can build up a robotic base there, send out a million more nanoprobes to all proximal star systems, and repeat this process to expand across our entire galaxy.
Project Longshot:
If you want to see why this is practical read about Project Longshot (http://en.wikipedia.org/wiki/Project_Longshot) and Project Daedalus (http://en.wikipedia.org/wiki/Project_Daedalus).
http://upload.wikimedia.org/wikipedia/commons/thumb/6/61/NASA-project-orion-artist.jpg/750px-NASA-project-orion-artist.jpg
Project Longshot is a conceptual design for an interstellar spacecraft (http://en.wikipedia.org/wiki/Spacecraft), an unmanned probe intended to fly to Alpha Centauri (http://en.wikipedia.org/wiki/Alpha_Centauri) powered by nuclear pulse propulsion (http://en.wikipedia.org/wiki/Nuclear_pulse_propulsion).
The journey to Alpha Centauri B (http://en.wikipedia.org/wiki/Alpha_Centauri) orbit would take about 100 years, at an approx. velocity of 13411 km/s, about 4.5% the speed of light, and another 4.39 years would be necessary for the data to reach Earth (http://en.wikipedia.org/wiki/Earth).
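The figures in the excerpt are easy to sanity-check. Taking the distance to Alpha Centauri as roughly 4.37 light-years (the 4.39-year data-return figure in the excerpt reflects the slightly different light-travel distance to Alpha Centauri B), the quoted cruise velocity does work out to about 4.5% of light speed and a trip of about a century:

```python
C_KM_S = 299_792.458      # speed of light, km/s
velocity_km_s = 13_411    # Longshot cruise velocity from the excerpt
distance_ly = 4.37        # approximate distance to Alpha Centauri, light-years

fraction_c = velocity_km_s / C_KM_S
travel_years = distance_ly / fraction_c

print(round(fraction_c * 100, 1))  # ~4.5 (% of light speed)
print(round(travel_years))         # ~98 years, matching "about 100 years"
```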
Ravachol
20th September 2010, 11:34
A lot of things I say are really fucked up :lol:
On a more serious note, that quote was in the context of turning ferocious predators (http://abolitionist.com/reprogramming/index.html) in nature into model citizens; however, I was saying maybe if the person was a truly crazy killer he/she would get the same treatment.
Reprogramming predators:
In the future, there is nothing to stop such technology being widely installed - together with mini-cameras and GPS tracking devices - in predatory carnivores to deter sociopathic violence against other sentient lifeforms. Indeed with the right reinforcement schedule, the most ferocious carnivore could be turned into a model citizen in our wildlife parks.
What you are describing is nothing but an authoritarian wasteland....
Forcing people into 'normality' and to behave like 'model citizens' using GPS trackers, mini-cameras and implants is nothing but the technological extension of biopower. A regime like that is nothing but an enforced, unfree order calling for its own destruction, no matter what the intentions are.
I'm sure you're well-meaning and all but seriously, what you describe is fascism on steroids lad....
Also:
And it will further change things by drastically modifying our fundamental neural architecture to make people much more intelligent than they ever were before, and to make it so that they never have to feel involuntary pain or suffering, and that they will be endowed with gradients of bliss beyond today's peak experiences.
Intelligence doesn't work like that... 'Intelligence' is an aggregate function of the neural tracts accumulated through the learning process and (self-)reflective criticism (known as 'supervised learning'). It isn't inherent to the architecture of our biological 'neural networks'. They're pretty awesome already...
But we shouldn't just take care of humans; with the Internet and its associated artificial intelligence we should proceed to make effective replacements for all animal products, like cultured meat instead of the livestock stuff, synthetic clothes instead of fur, etc. That way nobody ever has to use an animal as a product ever again.
I agree, but this will be the result of the transition towards Communism. The elimination of the profit motive from production and the introduction of production on the basis of free association will revise the entire industrial model anyways.
The final frontier then, after reconstructing the Internet, humanity, the global ecosystem, and all animals will be outer space. By sending out a million or so nano-probes to Alpha Centauri, we will undoubtedly be able to get at least one of them to reach there, therefore it is logical that we will expand across the galaxy in this manner.
Disregarding the entire debate above. You think it's 'logical' we'll expand because a few nano-probes reach Alpha Centauri? Haha Oh wow....
this is an invasion
20th September 2010, 20:28
This is some of the most terrifying authoritarian shit I've ever read.
None of this has anything to do with class struggle or the communist project.
Dimentio
20th September 2010, 22:28
Singularitarianism is an interesting concept, but should be used to elevate the individual, not erase her.
black magick hustla
20th September 2010, 22:30
a lot of people are getting really mad at what is essentially scifi. resident anarchists, chill the fuck out
ckaihatsu
20th September 2010, 22:50
Singularitarianism is an interesting concept, but should be used to elevate the individual, not erase her.
It's the fallacy that results from too much abstraction of the *individual* human intellect -- the premise is that our brainpower is fundamentally "lacking", humanity needs "saving", and so the "superhero" arrives in the form of the circuitry that we have spawned.
I'll repeat that either we have tools of increasing sophistication that are under our authority, or else an imagined independent artificial entity would merely have to find its place among our 7 billions. Anything beyond these is screenplay-worthy.
Dimentio
20th September 2010, 23:15
It's the fallacy that results from too much abstraction of the *individual* human intellect -- the premise is that our brainpower is fundamentally "lacking", humanity needs "saving", and so the "superhero" arrives in the form of the circuitry that we have spawned.
I'll repeat that either we have tools of increasing sophistication that are under our authority, or else an imagined independent artificial entity would merely have to find its place among our 7 billions. Anything beyond these is screenplay-worthy.
You interpret too much into my statement, and your asterisks have started to become pretty annoying. Use bolding or something else.
What I meant was that singularitarianism, like cybernetics, should only be used to add to the human experience, not to try to create some kind of "overman". The latter would be pretty pointless since it basically would be a violation of individuality and an attempt to establish an aesthetic ideal over all of humanity, turning it into a tragedy.
ckaihatsu
20th September 2010, 23:34
Singularitarianism
[...]
In July 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.
[...]
They noted that self-awareness as depicted in science-fiction is probably unlikely
[...]
http://en.wikipedia.org/wiki/Singularitarianism
Dimentio
21st September 2010, 14:40
Singularitarianism
[...]
In July 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.
[...]
They noted that self-awareness as depicted in science-fiction is probably unlikely
[...]
http://en.wikipedia.org/wiki/Singularitarianism
I am not a Kurzweilian and I do not believe in the delusion that they will be self-sufficient during our lifetime, more due to the decentralising bottleneck characteristics of contemporary capitalism than to any technological factors.
But...
In maybe 2153, if humanity has abolished capitalism and moved into an advanced post-capitalist society, we could have a society where machines conduct all the work and all of humanity is one aristocracy which spends its time doing whatever it wants to do.
cska
22nd September 2010, 06:00
Singularitarianism is bullshit. First, it will take at least hundreds of years to get robots to compete with human intelligence. Second, improving upon these robots will get progressively harder as the robots improve. This means that the robots will not create a singularity, but will rather simply help humanity advance a bit faster.
bcbm
22nd September 2010, 10:06
let's figure out how to deal with our current catastrophe, then maybe we can talk about robots
EvilRedGuy
22nd September 2010, 10:40
I think this is a joke. He can't honestly be serious. :laugh:
ÑóẊîöʼn
22nd September 2010, 12:21
let's figure out how to deal with our current catastrophe, then maybe we can talk about robots
As if it's impossible to do both at the same time. :rolleyes:
ckaihatsu
22nd September 2010, 12:50
Singularitarianism is bullshit. First, it will take at least hundreds of years to get robots to compete with human intelligence. Second, improving upon these robots will get progressively harder as the robots improve. This means that the robots will not create a singularity, but will rather simply help humanity advance a bit faster.
'Singularity' is a fantastical abstraction that lends itself well to fiction, particularly because of the anthropomorphization involved.
More realistically -- and don't quote me here (heh) -- is that various specialized expert systems could be combined together in more generalized ways to *simulate* the Wall-E-like robot that we all want to hug and bond with so badly (heh) -- (what's the opposite of xenophobia?). Okay, so given the right kind of Avatar-like front-end they could wind up being about as addictive as those "virtual pets" that are around now, especially for the younger set, but deep down I think the *knowing* that it's artificial will be the ever-present pane-shatterer. And if they simulate attitude with us we'll tell them to shut the fuck up because they're just shiny versions of Wikipedia.
(Human 'intelligence' is an abstraction -- human *intentionality*, not so much.)
ckaihatsu
23rd September 2010, 04:50
---
various specialized expert systems could be combined together in more generalized ways
Complex systems is a new approach to science that studies how relationships between parts give rise to the collective behaviors of a system and how the system interacts and forms relationships with its environment.
http://en.wikipedia.org/wiki/Complex_systems#Complexity_and_chaos_theory
Genetic algorithm
[...]
Methodology
In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.
Genetic algorithms find application in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics and other fields.
A typical genetic algorithm requires:
1. a genetic representation of the solution domain,
2. a fitness function to evaluate the solution domain.
[...]
http://en.wikipedia.org/wiki/Genetic_algorithm
List of genetic algorithm applications
http://en.wikipedia.org/wiki/List_of_genetic_algorithm_applications
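The methodology quoted above — a population of bit strings, fitness evaluation, stochastic fitness-based selection, recombination, and mutation across generations — can be sketched in a few lines. This is a toy "OneMax" example (maximize the number of 1-bits); all names and parameter values are illustrative, not from any particular library:

```python
import random

def fitness(chromosome):
    # Toy fitness function: count of 1-bits ("OneMax").
    return sum(chromosome)

def evolve(pop_size=20, length=16, generations=50, mutation_rate=0.01):
    # 1. Start from a population of randomly generated individuals.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Stochastically select parents based on fitness (+1 avoids
        #    zero weights when every bit is 0).
        weights = [fitness(c) + 1 for c in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        # 3. Recombine (single-point crossover) and randomly mutate
        #    to form the new population for the next generation.
        next_gen = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[(i + 1) % pop_size]
            point = random.randint(1, length - 1)
            for child in (a[:point] + b[point:], b[:point] + a[point:]):
                child = [bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in child]
                next_gen.append(child)
        population = next_gen[:pop_size]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically close to the maximum of 16
```

The two required ingredients the excerpt lists map directly onto the code: the bit-string encoding is the genetic representation of the solution domain, and `fitness` is the evaluation function.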
ckaihatsu
12th February 2011, 18:34
Re: [LaborTech] When Humans, Machines Merge
When Humans, Machines Merge
http://www.time.com/time/health/article/0,8599,2048138,00.html
[...],
This stuff drives me up the wall because it's merely an extension of the classic Western fetish with (individualized) intelligence, taken to fantastical science-fiction lengths. The very premise is a balkanizing, alienating one that effectively cuts against the *collective*, organizing way in which the working class realizes its power in society -- *especially* over matters relating to information and thinking, or collective consciousness.
Certainly *anything* inanimate can be imbued with characteristics of human "personality" -- virtual pets, toys, objects, gods, etc. All it takes is people's *willingness* to do so.
If technology advances to the point where someone can have a realistic conversation with a realistic-looking and -acting artificially intelligent machine, that doesn't mean that the machine is anything more than a machine, no matter how much we may pretend to believe otherwise.
And, if the machine is put into some position of authoritative decision-making over people's lives then *that* is a human decision as well -- there's no getting around that. No one would go to take issue with a child or an underling if they had a serious concern -- they would want to know who's in charge and would go to discuss the issue with *that* person or persons to seek an immediate, authoritative remedy. Likewise we should not *externalize* our own beings by looking to seek out an *external* savior. The emergence of an artificial life form, if it happens, would at most be a *political* event, and then would have to be dealt with as all other political matters have been dealt with -- politically. Any other approach only serves to alienate us from ourselves by way of internalizing the attitude that we are less than the masters of our domain on this planet.
I've gotten into some exchanges already on this topic -- here are two postings:
'Singularity' is a fantastical abstraction that lends itself well to fiction, particularly because of the anthropomorphization involved.
More realistically -- and don't quote me here (heh) -- is that various specialized expert systems could be combined together in more generalized ways to *simulate* the Wall-E-like robot that we all want to hug and bond with so badly (heh) -- (what's the opposite of xenophobia?). Okay, so given the right kind of Avatar-like front-end they could wind up being about as addictive as those "virtual pets" that are around now, especially for the younger set, but deep down I think the *knowing* that it's artificial will be the ever-present pane-shatterer. And if they simulate attitude with us we'll tell them to shut the fuck up because they're just shiny versions of Wikipedia.
(Human 'intelligence' is an abstraction -- human *intentionality*, not so much.)
It's the fallacy that results from too much abstraction of the *individual* human intellect -- the premise is that our brainpower is fundamentally "lacking", humanity needs "saving", and so the "superhero" arrives in the form of the circuitry that we have spawned.
I'll repeat that either we have tools of increasing sophistication that are under our authority, or else an imagined independent artificial entity would merely have to find its place among our 7 billions. Anything beyond these is screenplay-worthy.
http://www.revleft.com/vb/strong-ai-and-t141556/index.html?p=1872851
ÑóẊîöʼn
12th February 2011, 19:27
Certainly *anything* inanimate can be imbued with characteristics of human "personality" -- virtual pets, toys, objects, gods, etc. All it takes is people's *willingness* to do so.
If technology advances to the point where someone can have a realistic conversation with a realistic-looking and -acting artificially intelligent machine, that doesn't mean that the machine is anything more than a machine, no matter how much we may pretend to believe otherwise.
So what is your criterion for something being truly self-aware? A soul, perhaps? :laugh:
And, if the machine is put into some position of authoritative decision-making over people's lives then *that* is a human decision as well -- there's no getting around that. No one would go to take issue with a child or an underling if they had a serious concern -- they would want to know who's in charge and would go to discuss the issue with *that* person or persons to seek an immediate, authoritative remedy. Likewise we should not *externalize* our own beings by looking to seek out an *external* savior. The emergence of an artificial life form, if it happens, would at most be a *political* event, and then would have to be dealt with as all other political matters have been dealt with -- politically. Any other approach only serves to alienate us from ourselves by way of internalizing the attitude that we are less than the masters of our domain on this planet.
Newsflash - we aren't "masters of our domain". Through our stupidity and shortsightedness, we have repeatedly shat on our own doorstep. We're very bright animals, yes, quite often ruthlessly so. We're good enough to have got where we are now, but are we good enough to go where it matters in the future? If not we're going to need all the help we can get.
ckaihatsu
12th February 2011, 20:08
So what is your criterion for something being truly self-aware? A soul, perhaps? :laugh:
Actually, yes, but not in the religious / supernatural way -- I have an illustration titled 'How to Secularize Common Religious Terms' that posits the 'soul' as being "one's recollection of one's own past". I take this from Oscar Wilde, and it holds true. One's self-identity is *very* dependent on one's own interpretation of one's own past, because no one else can present you to the world quite as well as you yourself can.
tinyurl.com/ywonel
Also consider that we have to present ourselves to others a certain way, in realtime, going forward. This cannot merely be done mechanically or algorithmically, because if that were the case then one would be a machine oneself. Rather we give certain importance to past events and experiences in our lives, blending them into our present state of being.
Any artificial life that could not imbue its past with some kind of arbitrary, relative, yet self-selected subjective meaning could not be said to have a 'soul' in the secular sense of the term -- it would not be able to reference its own being from the past and so would be limited to referencing Wikipedia, at best.
Newsflash - we aren't "masters of our domain". Through our stupidity and shortsightedness, we have repeatedly shat on our own doorstep. We're very bright animals, yes, quite often ruthlessly so. We're good enough to have got where we are now, but are we good enough to go where it matters in the future? If not we're going to need all the help we can get.
Well, that's *one* interpretation of humanity's experience so far, but your characterization aside, you're *still* reaching for something that is / would be *external* to ourselves as a species. I seriously doubt that anything *derived* from humanity would *transcend* the domain of humanity's knowledge and creations in any significant way.
ÑóẊîöʼn
12th February 2011, 20:45
Actually, yes, but not in the religious / supernatural way -- I have an illustration titled 'How to Secularize Common Religious Terms' that posits the 'soul' as being "one's recollection of one's own past". I take this from Oscar Wilde, and it holds true. One's self-identity is *very* dependent on one's own interpretation of one's own past, because no one else can present you to the world quite as well as you yourself can.
How do you tell the difference between someone who claims to be able to recall their past vs someone who actually can?
And what about amnesia? If people forget who they are, are they no longer people?
Further, how on earth does that prevent machines from having consciousness?
Also consider that we have to present ourselves to others a certain way, in realtime, going forward. This cannot merely be done mechanically or algorithmically, because if that were the case then one would be a machine oneself. Rather we give certain importance to past events and experiences in our lives, blending them into our present state of being.
Still don't see why machines can't do that. In fact there is ample evidence to believe that life is a naturally-occurring survival machine.
Well, that's *one* interpretation of humanity's experience so far, but your characterization aside, you're *still* reaching for something that is / would be *external* to ourselves as a species. I seriously doubt that anything *derived* from humanity would *transcend* the domain of humanity's knowledge and creations in any significant way.
Perhaps not at first. But small changes can add up and build on each other, leading to radically different conditions than at the beginning.
ckaihatsu
12th February 2011, 21:06
How do you tell the difference between someone who claims to be able to recall their past vs someone who actually can?
And what about amnesia? If people forget who they are, are they no longer people?
These first two points of yours are *side* issues, in the domain of *human* personhood, and so are not directly about artificial life. I'll prefer here to just say that we know that people are diverse and that not everyone lives life under ideal circumstances.
Actually, yes, but not in the religious / supernatural way -- I have an illustration titled 'How to Secularize Common Religious Terms' that posits the 'soul' as being "one's recollection of one's own past". I take this from Oscar Wilde, and it holds true. One's self-identity is *very* dependent on one's own interpretation of one's own past, because no one else can present you to the world quite as well as you yourself can.
Further, how on earth does that prevent machines from having consciousness?
You're implying here that machine consciousness can be *presumed* in the first place. The burden of proof is on *your* position to make arguments *for* it, rather than assuming it and then defending that assumption from arguments.
But, from my side of things, I'm saying that a potential artificial consciousness would have to be able to imbue its past with some kind of arbitrary, relative, yet self-selected subjective meaning. This, in other words, means being able to make value judgments over the entire range of past inputs from all sensory domains, including internal ones. This, in short, means that an artificial life would have to be able to create rational meaning for itself -- this intensifies the question of what 'experience' is, what a 'domain of experience' is, and how subjectivity is derived in order to make sense / meaning out of such a domain of experience.
Still don't see why machines can't do that. In fact there is ample evidence to believe that life is a naturally-occurring survival machine.
Okay, if that's *your* definition then you're not going to have much of a life living *that* way -- since you're having some relatively high-level discussions with me here on this message board it's obvious that you don't subscribe to this definition of life for *yourself*.
Perhaps not at first. But small changes can add up and build on each other, leading to radically different conditions than at the beginning.
I can appreciate this but I'll just say for now then that I'm not as much of a futurist as you are.
ckaihatsu
13th February 2011, 02:36
The emergence of an artificial life form, if it happens, would at most be a *political* event, and then would have to be dealt with as all other political matters have been dealt with -- politically.
I'd also like to add here that there's no reason to take a "hands-off" attitude in relation to all of this right now -- instead of fetishizing 'intelligence' and gawking at the Singularity sales pitch it might be more appropriate to consider this as a *political development* in the here-and-now:
The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."
When Humans, Machines Merge
http://www.time.com/time/health/article/0,8599,2048138,00.html
Since this hasn't actually happened yet there's still time to consider all of this in a political context and take positions on how far a Singularitarian movement *should* go in promoting this and making it sound like a fait accompli. If we should have valid concerns about such a development then there's no reason to bring it around to mainstream *acceptance* -- an attitude of *skepticism* and even *suspicion* may be more what's called for at this point, if only to fend off a sense of learned helplessness, and long before the proverbial mad scientist's Frankensteinian plan is tolerated as an inevitable tinkering.