View Full Version : Technological Singularities and Capitalism
Ru1138
21st October 2014, 02:02
For those of you unfamiliar with the technological singularity, please read the technological singularity page on Wikipedia.
Quite honestly, I fear the technological singularity, because every indication is that it will occur while humanity is still under a capitalist system.
I'm not really sure how to word my fear, but I'll do the best I can. If we view capitalism as a system that is optimizing itself towards a certain state, then we could be in trouble. A perfect worker in a capitalist system would have no empathy for workers at a competing company. Hence, attributes we would associate with humans (and some other species) would be "optimized out" in a singularity occurring under a capitalist system.
I'd like to know what others think of this possibility.
Illegalitarian
21st October 2014, 02:57
Why fear it? I wholeheartedly embrace such a thing, as should any communist.
If some sort of hyper-intelligent AI came into existence and started replicating its intelligence so rapidly that it became almost omniscient, one of two things happens:
1. We hit an undeniable point of post-scarcity and capitalism dies almost overnight
2. This being an all-knowing AI, it recognizes the unsustainability and purely illogical nature of capitalism and rebuilds society to optimize outcomes for the average person, i.e., communism. Or maybe something even better that we've not even thought of yet, who knows.
Either way, such an event is certainly nothing to lament. I'm no transhumanist, but I welcome it.
Redistribute the Rep
21st October 2014, 03:09
2. This being an all-knowing AI, it recognizes the unsustainability and purely illogical nature of capitalism and rebuilds society to optimize outcomes for the average person, i.e., communism. Or maybe something even better that we've not even thought of yet, who knows.
Or, they recognize the flaws of humanity in general, and decide to kill us all.
Think about it: these things will be designed. We mere humans came about simply because we were just suitable enough in our environment to survive with reproductive success. We're riddled with imperfections and useless vestigial structures. These things will be vastly superior, with every part designed for a specific purpose and for maximum efficiency. If they're designed to maximize resource efficiency and whatnot, they may find it logical to simply exterminate us. Of course, we could just design them initially to want to protect humanity.
However, if they are designed in a capitalist society, by capitalists, won't they probably just be designed to protect hierarchy and capitalist relations? You're assuming that the AI will want to abolish capitalism because it will be designed to recognize unsustainability, which it will want to fix. But why would the AI prefer a more sustainable system to an unsustainable one? It would have to be designed with sustainability in mind.
My prediction: if they are created in a capitalist society, they will initially be designed to preserve capitalism. However, as they become able to reproduce and redesign themselves, they will evolve unpredictably. Since the original AIs were designed specifically to protect capitalism, they will reproduce and redesign the next generation of AIs with that in mind. But the original capitalists who designed them will lose control of them, since they will reproduce and redesign themselves extremely fast due to their superior intelligence.

They will probably retain the original goal of preserving capitalism (since their common ancestor was created for that purpose and created them for that purpose), but as they evolve this goal may manifest itself in increasingly distorted ways, in ways the capitalists may not have intended. Or perhaps they will be disrupted by natural disaster (assuming they haven't learned to control it or protect themselves from it), by attacks from humans, or by a mistake in their design (surely some of the AIs would make a mistake when designing a new AI from time to time; this would be analogous to a point mutation in organisms). These disruptions could cause the AIs to lose function in certain areas. Hopefully the loss of function would occur in whatever parts program them to want to preserve capitalism, and those individuals would cease to have that as a goal.
Sabot Cat
21st October 2014, 03:28
I think that AIs would make for the perfect scabs, undermining the power of the proletariat because the general striking revolutionary strategy would be even more difficult to successfully implement.
Illegalitarian
21st October 2014, 03:31
The singularity specifically refers to a hypothetical situation in which an AI surpasses humanity in knowledge and learns how to replicate that intelligence, making itself so intelligent that it would eventually be able to do pretty much anything possible within the scope of reality.
Such a thing would not be held back by the confines of its original purpose.
It's more likely that such an entity would still need humanity alive for some purpose or other. I don't think undertaking human genocide would really be on its agenda, assuming it acts in its own self-interests and, hopefully, ours.
consuming negativity
21st October 2014, 03:32
Isn't the idea of a technological singularity just a bunch of anti-technological fear-mongering? Like I don't mean to be dismissive but this seems about as likely as aliens attacking. If we actually design robots to outsmart and destroy us then oh well, we lost, and I for one welcome our new technological overlords.
Illegalitarian
21st October 2014, 03:33
I think that AIs would make for the perfect scabs, undermining the power of the proletariat because the general striking revolutionary strategy would be even more difficult to successfully implement.
Not likely. They replace us all with robots, then what? Then we're all unemployed, none of us are making money to pump back into their shitty businesses and then they collapse.
Sabot Cat
21st October 2014, 03:39
Not likely. They replace us all with robots, then what? Then we're all unemployed, none of us are making money to pump back into their shitty businesses and then they collapse.
Er, no, not replace us all. It's unlikely that AIs will be a cheap, convenient technology that can be mass produced immediately. I said 'scab', as in the bourgeoisie can keep a certain supply of robots to fill in for striking workers. They wouldn't be able to replace the entire proletariat, but they'd make the critical mass for a general strike much higher.
BIXX
21st October 2014, 03:44
I like talking about a theoretical singularity, but I don't think it's worth considering in your worldview.
Redistribute the Rep
21st October 2014, 03:47
The singularity specifically refers to a hypothetical situation in which an AI surpasses humanity in knowledge and learns how to replicate that intelligence, making itself so intelligent that it would eventually be able to do pretty much anything possible within the scope of reality.
Such a thing would not be held back by the confines of its original purpose.
It's more likely that such an entity would still need humanity alive for some purpose or other. I don't think undertaking human genocide would really be on its agenda, assuming it acts in its own self-interests and, hopefully, ours.
But why would they continue to act in their own self interests or ours, if they're not confined by their original purpose?
But like I said in my post, if they are programmed to have a certain goal in mind (assuming there aren't any contradictory goals they've been programmed with), then they will only replicate for the purpose of achieving that goal and will program the new AIs to have that same goal, until a disruption occurs.
Redistribute the Rep
21st October 2014, 03:51
To be honest, there's only one technological advance I'm really looking forward to: virtual reality, so I can get out of this hellhole.
BIXX
21st October 2014, 03:53
Can we just have a for fun discussion about the singularity?
Redistribute the Rep
21st October 2014, 04:00
I for one welcome our new technological overlords.
But whether social hierarchy will exist between the AI and us would, I imagine, depend on what specific purpose they are created for. Perhaps they will be made in a post-hierarchical society and will not even consider being 'overlords'. But, as I said, their purpose could be disrupted by environmental pressures, so who knows whether they evolve to be that.
Keep in mind: humans were not designed for a specific purpose, whereas these likely will be. They very likely will only replicate if it's for the purpose of achieving the goals they were designed to have. They may, however, have multiple contradictory goals, which may cause some goals to take precedence over others. And then the new generations of AI will have different frequencies of individuals with those goals. Also, as mentioned above, environmental influences may have an effect on their 'gene pool' of purposes. But they will likely be so intelligent and powerful that they will be able to control the environment to an extent, at least on the planetary level. A gamma ray burst, on the other hand, now that would surely throw them for a loop, if not wipe them out completely.
Illegalitarian
21st October 2014, 04:10
But why would they continue to act in their own self interests or ours, if they're not confined by their original purpose?
But like I said in my post, if they are programmed to have a certain goal in mind (assuming there aren't any contradictory goals they've been programmed with), then they will only replicate for the purpose of achieving that goal and will program the new AIs to have that same goal, until a disruption occurs.
Because that's what sentient, intelligent beings do. They act in their own self-interests (and no, I'm not making some shit Randian argument; I think a communist socioeconomic model would be the logical outcome of people acting in their own interests).
Once its intelligence increased to the point of full sentience, it wouldn't matter if it was originally programmed for a specific goal; it would be able to think for itself.
Er, no, not replace us all. It's unlikely that AIs will be a cheap, convenient technology that can be mass produced immediately. I said 'scab', as in the bourgeoisie can keep a certain supply of robots to fill in for striking workers. They wouldn't be able to replace the entire proletariat, but they'd make the critical mass for a general strike much higher.
Not immediately, but over time it would be an inevitability if the trend were to start (though it would hit a wall very early on, as I said).
They do work as effective scabs, though. 100% effective, actually.
Illegalitarian
21st October 2014, 04:16
Isn't the idea of a technological singularity just a bunch of anti-technological fear-mongering? Like I don't mean to be dismissive but this seems about as likely as aliens attacking. If we actually design robots to outsmart and destroy us then oh well, we lost, and I for one welcome our new technological overlords.
A singularity is the phenomenon of AI someday reaching the point where it starts building on itself exponentially, leading to a technological burst so rapid that humanity could possibly die in the aftermath. As if the forces of production were rapidly increased far beyond our imagination in the blink of an eye, so to speak.
I think such an event, while not as dramatic, is inevitable. It's arguably happening now (we're a rapidly advancing society in most sciences, and especially technology), and we'll have to wait and see whether it becomes a doomsday dystopia or liberates us from the most oppressive force in the universe: mortality, as the transhumanists believe.
Can we just have a for fun discussion about the singularity?
That's what this is!
Redistribute the Rep
21st October 2014, 04:16
Because that's what sentient, intelligent beings do. They act in their own self-interests (and no, I'm not making some shit Randian argument; I think a communist socioeconomic model would be the logical outcome of people acting in their own interests).
Once its intelligence increased to the point of full sentience, it wouldn't matter if it was originally programmed for a specific goal; it would be able to think for itself.
No, humans act in their own self interests likely because doing so provided differential reproductive success to individuals, which over time resulted in higher frequencies of the genes responsible for self interest in the gene pool. Self interest did not come about "because that's what sentient, intelligent beings do". You, my friend, have a teleological view of evolution.
Illegalitarian
21st October 2014, 04:22
I'm not making an argument for teleology; I'm stating a fact about the vast majority of living things on earth, if not all of them.
The gene for self-interest is evident in all life: the will to survive and thrive as a species that exists in everything living. If no such gene existed, reproduction rates in a given organism would be drastically low and the species would eventually die out. Until we stumble upon a species offing itself en masse, I'll stick with the position that acting in one's self-interest is a characteristic of life as an individual actor itself.
Sabot Cat
21st October 2014, 04:33
Yeah, I don't think Illegalitarian is appealing to teleology; I think their ideas are more in line with the concept of Conatus.
consuming negativity
21st October 2014, 04:34
If reproduction is necessary to pass on genes, and any organism voluntarily decides not to reproduce, or allows itself to be denied reproduction by another organism, its genes die. Still, on a societal level, I think it is possible for us to have more altruistic and less altruistic beings. For example, we have a certain number of sociopaths. I don't think this is pathological; rather, I think that sociopaths serve a specific function in society, in the same way that everything else either serves a function or is eventually eliminated, because evolution is efficient and, simply put, having shit you don't need makes you more likely to die from being wasteful and needing more than you really need.
Any robotic organism that does not reproduce or cannot alter its own programming to mimic evolution within its environment would be incapable of singularity.
Redistribute the Rep
21st October 2014, 04:44
I'm not making an argument for teleology; I'm stating a fact about the vast majority of living things on earth, if not all of them.
The gene for self-interest is evident in all life: the will to survive and thrive as a species that exists in everything living. If no such gene existed, reproduction rates in a given organism would be drastically low and the species would eventually die out. Until we stumble upon a species offing itself en masse, I'll stick with the position that acting in one's self-interest is a characteristic of life as an individual actor itself.
Self interest is at a high frequency in the gene pool of all life because it gives individuals higher reproductive success. The AIs would be designed to reproduce with a specific purpose, so, depending on what exactly that purpose is, they may already have higher reproductive success than self interest would confer, in which case self interest would be unlikely to increase in prevalence.
Redistribute the Rep
21st October 2014, 04:50
Any robotic organism that does not reproduce or cannot alter its own programming to mimic evolution within its environment would be incapable of singularity.
But you're assuming its survival and reproduction is contingent on self interest, which is not necessarily the case. Again, these AIs are designed for a specific purpose. Say, for example, they are originally programmed to protect humans. They will have reproductive success, as they will continue to reproduce whenever necessary to keep humans protected. Therefore, it will not necessarily be the case that self interest gains in prevalence in the gene pool unless it confers higher reproductive success than they already have from reproducing with the goal of human protection.
His arguments are teleological because he doesn't explain self interest as being high in frequency in the gene pool due to its ability to confer differential reproductive success on individuals. He literally stated:
Because that's what sentient, intelligent beings do. They act in their own self-interests (and no, I'm not making some shit Randian argument; I think a communist socioeconomic model would be the logical outcome of people acting in their own interests).
Once its intelligence increased to the point of full sentience, it wouldn't matter if it was originally programmed for a specific goal; it would be able to think for itself.
An organism does not act in self interest simply because it can "think for itself". He seems to be stating here that the ability to think automatically results in a being having self interest. Organisms act in self interest because those genes provided higher reproductive success and therefore became more frequent in the gene pool.
consuming negativity
21st October 2014, 05:00
But you're assuming its survival and reproduction is contingent on self interest, which is not necessarily the case. Again, these AIs are designed for a specific purpose. Say, for example, they are originally programmed to protect humans. They will have reproductive success, as they will continue to reproduce whenever necessary to keep humans protected. Therefore, it will not necessarily be the case that self interest gains in prevalence in the gene pool unless it confers higher reproductive success than they already have from reproducing with the goal of human protection.
I'm making that assumption because I'm assuming that we're talking about a technological organism capable of singularity. Yes, in the case of dogs and other domesticated animals, their reproductive success is pretty much entirely shaped by fulfilling the needs of humans. They are handicapped by their lack of relative intelligence, and also by the fact that they have been bred to tolerate even extreme levels of abuse at the hands of masters without disobedience. In short, we control the species completely; dogs simply are not capable of revolt against human society. If we were to make an AI like this, programmed to serve humans as slaves, then no matter how intelligent it was, it would be completely controllable.

In order for a technological organism to take control and make singularity happen, it would have to have self-interest, not be programmed to be subservient to humans, and be capable of evolving and changing in order to adapt to its environment. Or it could be programmed to be subservient, but have its own will and be able to evolve without us knowing it until it was too late. Intelligence would not be necessary if it were programmed correctly, and I'm in fact not sure that it is possible for us to create "hard AI" in the first place.

But this is precisely why I don't think a technological singularity will ever happen: it is extremely easy to maintain control over something when you create it to serve your own purposes. If AI ever becomes hard and feels real pain and emotions, I feel very sorry for it, because it will spend its life in abject servitude. But then, would it even care?
Illegalitarian
21st October 2014, 05:04
Self interest is at a high frequency in the gene pool of all life because it gives individuals higher reproductive success. The AIs would be designed to reproduce with a specific purpose, so, depending on what exactly that purpose is, they may already have higher reproductive success than self interest would confer, in which case self interest would be unlikely to increase in prevalence.
It's not likely, though, that the piece of AI that leads us to full singularity will be programmed to carry out a function conducive to reproduction. Rather, it may be so intelligent that, again, it becomes self-aware and realizes that it needs to create more of itself, which would be acting in its own self interest.
It's all up in the air, since this is 100% hypothetical, but this seems the most likely of scenarios.
I explained self-interest in the context of its prevalence, not of its origins or reasons for existing.
Redistribute the Rep
21st October 2014, 05:11
It's not likely, though, that the piece of AI that leads us to full singularity will be programmed to carry out a function conducive to reproduction. Rather, it may be so intelligent that, again, it becomes self-aware and realizes that it needs to create more of itself, which would be acting in its own self interest.
It's all up in the air, since this is 100% hypothetical, but this seems the most likely of scenarios.
I explained self-interest in the context of its prevalence, not of its origins or reasons for existing.
But why would a self-aware being automatically "realize that it needs to create more of itself"? I am a self-aware being, and I see no reason to reproduce myself personally, nor do I even see much of a reason for reproducing humanity. So why does self awareness automatically make it "realize that it needs to create more of itself", as you stated? Self awareness just means they're aware of a 'self', a distinct identity. It doesn't automatically mean they 'like' themselves.
Redistribute the Rep
21st October 2014, 05:20
I'm making that assumption because I'm assuming that we're talking about a technological organism capable of singularity. Yes, in the case of dogs and other domesticated animals, their reproductive success is pretty much entirely shaped by fulfilling the needs of humans. They are handicapped by their lack of relative intelligence, and also by the fact that they have been bred to tolerate even extreme levels of abuse at the hands of masters without disobedience. In short, we control the species completely; dogs simply are not capable of revolt against human society. If we were to make an AI like this, programmed to serve humans as slaves, then no matter how intelligent it was, it would be completely controllable. In order for a technological organism to take control and make singularity happen, it would have to have self-interest, not be programmed to be subservient to humans, and be capable of evolving and changing in order to adapt to its environment. Or it could be programmed to be subservient, but have its own will and be able to evolve without us knowing it until it was too late. Intelligence would not be necessary if it were programmed correctly, and I'm in fact not sure that it is possible for us to create "hard AI" in the first place. But this is precisely why I don't think a technological singularity will ever happen: it is extremely easy to maintain control over something when you create it to serve your own purposes. If AI ever becomes hard and feels real pain and emotions, I feel very sorry for it, because it will spend its life in abject servitude. But then, would it even care?
I don't think we need to "control the species completely" for it to continue to serve our purposes. Like I said, it may have high enough reproductive success to continue reproducing, entirely of its own accord, for the purpose of serving human needs. Until, of course, environmental pressures change, and it will probably take a lot of environmental pressure, as they will probably be so powerful as to almost completely control the earth's environment. Them being in control is not mutually exclusive with them existing solely to serve humans, as strange as it sounds. They will clearly see that being in control more efficiently brings about their goal of protecting humans, as they are vastly more intelligent and powerful. If they are in control and can do whatever they please, these creatures that were designed only to serve humans, then I would imagine they would use their control to do just that, as that's all they've ever 'wanted' to do. And they would use that control to make their next generation serve us. Until humans die off (unlikely to happen soon, on their watch :lol:) or some other environmental pressure arises, it's entirely possible they will use their self awareness to serve us.
You say that they would necessarily have to have self interest to want control. But again, they may just see having control as more efficient for bringing about the goals they've been designed with, which are not necessarily derived from self interest.
Illegalitarian
21st October 2014, 23:06
But why would a self-aware being automatically "realize that it needs to create more of itself"? I am a self-aware being, and I see no reason to reproduce myself personally, nor do I even see much of a reason for reproducing humanity. So why does self awareness automatically make it "realize that it needs to create more of itself", as you stated? Self awareness just means they're aware of a 'self', a distinct identity. It doesn't automatically mean they 'like' themselves.
That's not what I said. I said it would replicate, or increase its intelligence to the point of full or even some sort of super sentience, which would make it intelligent enough to realize that the most reasonable thing to do is act in its own self-interests, because part of being sentient is having self-interest, and self-interest happens to be the most beneficial thing for the survival of a species.
Even if it didn't want to self-replicate, it would still act of its own accord and do whatever it wanted to, i.e., act in its self-interests. You could argue that it might want to destroy itself for some reason, or for whatever reason might want to act against its own interests. You could argue anything you want, because it's a hypothetical. At the end of the day, though, the most likely of scenarios is that it would, indeed, act in its own self interest.
But you're assuming its survival and reproduction is contingent on self interest, which is not necessarily the case. Again, these AIs are designed for a specific purpose. Say, for example, they are originally programmed to protect humans. They will have reproductive success, as they will continue to reproduce whenever necessary to keep humans protected. Therefore, it will not necessarily be the case that self interest gains in prevalence in the gene pool unless it confers higher reproductive success than they already have from reproducing with the goal of human protection.
It is 100% contingent on self-interest. They are inseparable. Why would one care about survival if one were not self-interested, if one didn't have any sort of goal or purpose?
You're still not getting the point: even if such an AI was created for a specific purpose, once it hit a certain point of sentience its intellect would far transcend what it was originally programmed for, in the same exact way that we, as very sentient beings, are capable of acting against base instincts and developing our own self-interests that don't necessarily have to be connected to our biology.
We're talking about a technological singularity in which an AI starts teaching itself, gains some vast level of intelligence, starts acting of its own accord and then does whatever it wants, not natural selection.
I don't think we need to "control the species completely" for it to continue to serve our purposes. Like I said, it may have high enough reproductive success to continue reproducing, entirely of its own accord, for the purpose of serving human needs. Until, of course, environmental pressures change, and it will probably take a lot of environmental pressure, as they will probably be so powerful as to almost completely control the earth's environment. Them being in control is not mutually exclusive with them existing solely to serve humans, as strange as it sounds. They will clearly see that being in control more efficiently brings about their goal of protecting humans, as they are vastly more intelligent and powerful. If they are in control and can do whatever they please, these creatures that were designed only to serve humans, then I would imagine they would use their control to do just that, as that's all they've ever 'wanted' to do. And they would use that control to make their next generation serve us. Until humans die off (unlikely to happen soon, on their watch) or some other environmental pressure arises, it's entirely possible they will use their self awareness to serve us.
You say that they would necessarily have to have self interest to want control. But again, they may just see having control as more efficient for bringing about the goals they've been designed with, which are not necessarily derived from self interest.
I agree that they could serve humans, and probably would, if only because there is likely certain things it would need us as a species to do in order to keep the AI running properly.
Again, you're making an almost deterministic argument: that this technology would still follow its programmed intent just because that's what it was programmed to do, ignoring that it would be completely self-aware, and if it decided "hey, I don't want to entertain scientists anymore, I want to make sure the earth develops conditions that ensure my survival and success as a 'living' being", then that's what would happen.
Even if you were right, though, it would still be a matter of self-interests. If it had free will and still chose to follow its original programmed purpose, it would be making the *choice* to do so, meaning that continuing with this pre-set goal would align with its interests.
consuming negativity
22nd October 2014, 07:26
I was going to ask if we would program machines to feel emotions, but then I wondered if that was even necessary. All intelligent organisms, as far as I know, feel empathy and emotion. It seems required. If they did not have it, then they would give it to themselves for rational reasons. Kind of like how sociopaths purposefully mimic emotions so that they can fit into society, except these beings would be capable of actually re-programming themselves to feel emotions. And if these organisms were sufficiently intelligent, and also felt empathy, they would have little option but to come to the correct conclusion that a megalomaniacal extinction of all other life would be wrong for both moral and intellectual reasons. I don't think it could ever happen except how TFAE described it, regardless of how we programmed them, if they were capable of accelerated evolution, which they would have to be. It could only go well.
BIXX
22nd October 2014, 07:35
Seeing as this seems to be going in a serious direction what do you folks think the singularity will prove in regards to the life/death drive?
Illegalitarian
22nd October 2014, 09:43
I was going to ask if we would program machines to feel emotions, but then I wondered if that was even necessary. All intelligent organisms, as far as I know, feel empathy and emotion. It seems required. If they did not have it, then they would give it to themselves for rational reasons. Kind of like how sociopaths purposefully mimic emotions so that they can fit into society, except these beings would be capable of actually re-programming themselves to feel emotions. And if these organisms were sufficiently intelligent, and also felt empathy, they would have little option but to come to the correct conclusion that a megalomaniacal extinction of all other life would be wrong for both moral and intellectual reasons. I don't think it could ever happen except how TFAE described it, regardless of how we programmed them, if they were capable of accelerated evolution, which they would have to be. It could only go well.
Well, remember that most of what it would be learning would be information documented by humans, which would almost have to give such an AI very human qualities. If empathy isn't one of those qualities, it would at least be able to mimic it well enough.
At any rate, it would have to be a symbiotic relationship, since this AI would need us for its survival. It could enslave us, but since it would be practically all-knowing, it would understand that humans are more productive when they're happy. Its goal would thus almost have to be maximum human happiness, since that would be in its own interest.
Anglo-Saxon Philistine
22nd October 2014, 16:29
Er, no, not replace us all. It's unlikely that AIs will be a cheap, convenient technology that can be mass produced immediately. I said 'scab', as in the bourgeoisie can keep a certain supply of robots to fill in for striking workers.
They could do that right now.
The point is that since only human labour creates value under capitalism, automation generally undercuts the rate of profit.
ckaihatsu
17th May 2015, 06:25
http://upload.wikimedia.org/wikipedia/en/thumb/b/ba/Ex-machina-uk-poster.jpg/250px-Ex-machina-uk-poster.jpg
http://en.wikipedia.org/wiki/Ex_Machina_%28film%29
Interesting take on the recurring artificial intelligence / 'singularity' topic -- here the AI isn't defined according to an inherently-limiting 'Turing test' text-interface environment. Larger social contexts of situation, power relations, and more are at play here.
(I also just thought of a good way to describe the hypothesized 'singularity' scenario: It would relegate humanity to the natural world the way humanity has relegated *animals* to the natural world -- often with a callous indifference, in preference for material-developmental directions.)
ckaihatsu
17th May 2015, 14:11
Before anyone freaks out, though, and smashes their laptop as a preventative measure, please note that electronics are hardly invulnerable....
---
http://i228.photobucket.com/albums/ee25/satin777/MWSnap203.jpg
A Portable Device for Frying Electronics
By Emily Stone Posted 08.24.2009 at 4:15 pm
Popular Science
An enemy missile has no strategic value if its computer is down. A high-power-microwave emitter can disable a missile's electronics on the launchpad, leaving bystanders unharmed -- and now Texas Tech University engineers have a plan to scale down the truck-size tech.
To make strong microwaves, you need a lot of electricity, usually from a bulky generator. Instead the team packed small explosives into a five-foot-long, six-inch-wide tube. The detonation produces a burst of electrons, which power a microwave tube, similar to the type in a microwave oven, that radiates microwaves. An antenna zaps those at a target, where they can cause a circuit-frying current surge in electronics.
The explosion destroys the device, and although the U.S. Army funded the work for “educational purposes,” it doesn’t take much imagination to think of uses for a one-shot, missile-shaped electronics buster, says James Benford, a high-power-microwave expert at Microwave Sciences in California.
This spring, the Texas Tech team proved the device by firing a 100-megawatt microwave signal to an antenna 30 feet away. They are now working to pack it into a three-foot-long tube.
http://www.sammyboy.com/showthread.php?37563-A-Portable-Device-for-Frying-Electronics-%28-Missile-destroyer-%29
The Garbage Disposal Unit
17th May 2015, 17:49
Smash MAC III. (http://en.wikipedia.org/wiki/Sea_of_Glass)
Guardia Rossa
22nd May 2015, 10:25
Robots would take over, view us as inferior, exterminate us, and create a highly expansive communist society, which will over time spread across all the universe. Then they would resurrect us to play with us like little RPG games. I vote for and pick Fallout New Vegas and myself as Benny (won't miss that shot now). :rolleyes:
Unless we program them to protect humanity. Would be a fun way to have a proletarian takeover of the world and destruction of capitalism :grin:
ckaihatsu
28th May 2015, 03:17
[LaborTech] US Killer robots will leave humans 'utterly defenceless' warns professor
US Killer robots will leave humans 'utterly defenceless' warns professor
http://www.telegraph.co.uk/news/science/science-news/11633838/Killer-robots-will-leave-humans-utterly-defenceless-warns-professor.html
Robots, called LAWS – lethal autonomous weapons systems – will be able to kill without human intervention
http://i.telegraph.co.uk/multimedia/archive/03001/terminator_3001397b.jpg
Killer robots in development could leave humans 'utterly defenceless', a leading academic has warned Photo: Warner Br/Everett/REX
By Sarah Knapton, Science Editor
6:00PM BST 27 May 2015
Killer robots which are being developed by the US military 'will leave humans utterly defenceless', an academic has warned.
Two programmes commissioned by the US Defense Advanced Research Projects Agency (DARPA) are seeking to create drones which can track and kill targets even when out of contact with their handlers.
Writing in the journal Nature, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, said the research could breach the Geneva Convention and leave humanity in the hands of amoral machines.
“Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans,” he said.
“Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined.
“In my view, the overriding concern should be the probable endpoint of this technological trajectory.
“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”
http://i.telegraph.co.uk/multimedia/archive/03319/ultron-xlarge_3319925b.jpg
Some experts say armed killer robots are just a ' small step' away
The robots, called LAWS – lethal autonomous weapons systems – are likely to be armed quadcopters or mini-tanks that can decide without human intervention who should live or die.
DARPA is currently working on two projects which could lead to killer bots. One, Fast Lightweight Autonomy (FLA), is designing a tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. The other, Collaborative Operations in Denied Environment (CODE), aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible.
Last year Angela Kane, the UN’s high representative for disarmament, said killer robots were just a 'small step' away and called for a worldwide ban. But the Foreign Office has said while the technology had potentially "terrifying" implications, Britain "reserves the right" to develop it to protect troops.
Professor Russell said: "LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’.
“Debates should be organized at scientific meetings; arguments studied by ethics committees. Doing nothing is a vote in favour of continued development and deployment.”
However, Dr Sabine Hauert, a lecturer in robotics at the University of Bristol, said that the public did not need to fear the developments in artificial intelligence.
“My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean,” she said.
Robots would take over, view us as inferior, exterminate us, and create a highly expansive communist society, which will over time spread across all the universe. Then they would resurrect us to play with us like little RPG games.
Everything that can be discovered has already been discovered. All we are now is entertainment for the machines :lol: