
View Full Version : Abolitionism and the elimination of suffering



CommunityBeliever
18th September 2010, 08:19
This is copied from the following url:

http://www.abolitionist.com/

Abolitionism
This talk is about suffering and how to get rid of it.
I predict we will abolish suffering throughout the living world.
Our descendants will be animated by gradients of genetically preprogrammed well-being that are orders of magnitude richer than today's peak experiences.

First, I'm going to outline why it's technically feasible to abolish the biological substrates of any kind of unpleasant experience - psychological pain as well as physical pain.
Secondly, I'm going to argue for the overriding moral urgency of the abolitionist project, whether or not one is any kind of ethical utilitarian.
Thirdly, I'm going to argue why a revolution in biotechnology means it's going to happen, albeit not nearly as fast as it should.

WHY IT IS TECHNICALLY FEASIBLE

Sadly, what won't abolish suffering, or at least not on its own, is socio-economic reform, or exponential economic growth, or technological progress in the usual sense, or any of the traditional panaceas for solving the world's ills. Improving the external environment is admirable and important; but such improvement can't recalibrate our hedonic treadmill above a genetically constrained ceiling. Twin studies confirm there is a [partially] heritable set-point of well-being - or ill-being - around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals.

Unfortunately, attempts to build an ideal society can't overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one's garden. Even if everything that traditional futurists have asked for is delivered - eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc - there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors - or a New Guinea tribesman today - in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress, e.g. suicide rates, bear it out.

Unenhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations - sadness, anxiety, jealousy, existential angst. Their biology is part of "what it means to be human". Subjectively unpleasant states of consciousness exist because they were genetically adaptive. Each of our core emotions had a distinct signalling role in our evolutionary past: they tended to promote behaviours which enhanced the inclusive fitness of our genes in the ancestral environment.

So if manipulating our external environment alone can never abolish suffering and malaise, what does technically work?

Here are three scenarios in ascending order of sociological plausibility:

a) wireheading
b) utopian designer drugs
c) genetic engineering and - what I want to focus on - the impending reproductive revolution of designer babies

a) Recall wireheading is direct stimulation of the pleasure centres of the brain via implanted electrodes. Intracranial self-stimulation shows no physiological or subjective tolerance i.e. it's just as rewarding after two days as it is after two minutes. Wireheading doesn't harm others; it has a small ecological footprint; it banishes psychological and physical pain; and arguably it's a lot less offensive to human dignity than having sex. Admittedly, lifelong wireheading sounds an appealing prospect only to a handful of severe depressives. But what are the technical arguments against its adoption?

Well, wireheading is not an evolutionarily stable solution: there would be selection pressure against its widespread adoption. Wireheading doesn't promote nurturing behaviour: wireheads, whether human or non-human, don't want to raise baby wireheads. Uniform, indiscriminate bliss in the guise of wireheading or its equivalents would effectively bring the human experiment to an end, at least if it were adopted globally. Direct neurostimulation of the reward centres destroys informational sensitivity to environmental stimuli. So assuming we want to be smart - and become smarter - we have a choice. Intelligent agents can have a motivational structure based on gradients of ill-being, characteristic of some lifelong depressives today. Or intelligent agents can have our current typical mixture of pleasures and pains. Or alternatively, we could have an informational economy of mind based entirely on [adaptive] gradients of cerebral bliss - which I'm going to argue for.

Actually, this dismissal of wireheading may be too quick. In the far future, one can't rule out offloading everything unpleasant or mundane onto inorganic supercomputers, prostheses and robots while we enjoy uniform orgasmic bliss. Or maybe not orgasmic bliss, possibly some other family of ideal states that simply couldn't be improved upon. But that's speculative. Whatever our ultimate destination, it would be more prudent, I think, to aim for both superhappiness and superintelligence - at least until we understand the full implications of what we are doing. There isn't a moral urgency to maximizing superhappiness in the same way as there is to abolishing suffering.

b) The second technical option for eradicating suffering is futuristic designer drugs. In an era of mature post-genomic medicine, will it be possible rationally to design truly ideal pleasure-drugs that deliver lifelong, high-functioning well-being without unacceptable side-effects? "Ideal pleasure drugs" here is just a piece of shorthand. Such drugs can in principle embrace cerebral, empathetic, aesthetic and perhaps spiritual well-being - and not just hedonistic pleasure in the usual one-dimensional and amoral sense.
We're not talking here about recreational euphoriants, which simply activate the negative feedback mechanisms of the brain; nor the shallow, opiated contentment of a Brave New World; nor drugs that induce euphoric mania, with its uncontrolled excitement, loss of critical insight, grandiosity and flight of ideas. Can we develop true wonderdrugs that deliver sublime well-being on a sustainable basis, recalibrating the hedonic treadmill to ensure a high quality of life for everyone?

A lot of people recoil from the word "drugs" - which is understandable given today's noxious street drugs and their uninspiring medical counterparts. Yet even academics and intellectuals in our society typically take the prototypical dumb drug, ethyl alcohol. If it's socially acceptable to take a drug that makes you temporarily happy and stupid, then why not rationally design drugs to make people perpetually happier and smarter? Presumably, in order to limit abuse-potential, one would want any ideal pleasure drug to be akin - in one limited but important sense - to nicotine, where the smoker's brain finely calibrates its optimal level: there is no uncontrolled dose-escalation.

There are of course all kinds of pitfalls to drug-based solutions. Technically, I think these pitfalls can be overcome, though I won't try to show this here. But there is a deeper issue. If there weren't something fundamentally wrong - or at least fundamentally inadequate - with our existing natural state of consciousness bequeathed by evolution, then we wouldn't be so keen to change it. Even when it's not unpleasant, everyday consciousness is mediocre compared to what we call peak experiences. Ordinary everyday consciousness was presumably adaptive in the sense it helped our genes leave more copies of themselves on the African savannah; but why keep it as our default-state indefinitely? Why not change human nature by literally repairing our genetic code?

Again, this dismissal of pharmacological solutions may be too quick. Arguably, utopian designer drugs may always be useful for the fine-grained and readily reversible control of consciousness; and I think designer drugs will be an indispensable tool to explore the disparate varieties of conscious mind. But wouldn't it be better if we were all born with a genetic predisposition to psychological superhealth rather than needing chronic self-medication? Does even the most ardent abolitionist propose to give cocktails of drugs to all children from birth; and then to take such drug cocktails for the rest of our lives?

c) So thirdly, there are genetic solutions, embracing both somatic and germline therapy.
By way of context, today there is a minority of people who are always depressed or dysthymic, albeit to varying degrees. Studies with mono- and dizygotic twins confirm there is a high degree of genetic loading for depression. Conversely, there are some people who are temperamentally optimistic. Beyond the optimists, there is a very small minority of people who are what psychiatrists call hyperthymic. Hyperthymic people aren't manic or bipolar; but by contemporary standards, they are always exceedingly happy, albeit sometimes happier than others. Hyperthymic people respond "appropriately" and adaptively to their environment. Indeed they are characteristically energetic, productive and creative. Even when they are blissful, they aren't "blissed out".

Now what if, as a whole civilisation, we were to opt to become genetically hyperthymic - to adopt a motivational system driven entirely by adaptive gradients of well-being? More radically, as the genetic basis of hedonic tone is understood, might we opt to add multiple extra copies of hyperthymia-promoting genes/allelic combinations and their regulatory promoters - not abolishing homeostasis and the hedonic treadmill but shifting our hedonic set-point to a vastly higher level?

Three points here:
First, this genetic recalibration might seem to be endorsing another kind of uniformity; but it's worth recalling that happier people - and especially hyperdopaminergic people - are typically responsive to a broader range of potentially rewarding stimuli than depressives: they engage in more exploratory behaviour. This makes getting stuck in a sub-optimal rut less likely, both for the enhanced individual and posthuman society as a whole.

Secondly, universal hyperthymia might sound like a gigantic experiment; and in a sense of course it is. But all sexual reproduction is an experiment. We play genetic roulette, shuffling our genes and then throwing the genetic dice. Most of us flinch at the word "eugenics"; but that's what we're effectively practising, crudely and incompetently, when we choose our prospective mates. The difference is that within the next few decades, prospective parents will be able to act progressively more rationally and responsibly in their reproductive decisions. Pre-implantation diagnosis is going to become routine; artificial wombs will release us from the constraints of the human birth-canal; and a revolution in reproductive medicine will begin to replace the old Darwinian lottery. The question is not whether a reproductive revolution is coming, but rather what kinds of being - and what kinds of consciousness - do we want to create?

Thirdly, isn't this reproductive revolution going to be the prerogative of rich elites in the West? Probably not for long. Compare the brief lag between the introduction of, say, mobile phones and their world-wide adoption with the 50 year time-lag between the introduction and world-wide adoption of radio; and the 20 year lag between the introduction and world-wide penetration of television. The time-lag between the initial introduction and global acceptance of new technologies is shrinking rapidly. So of course is the price.

Anyway, one of the advantages of genetically recalibrating the hedonic treadmill rather than abolishing it altogether, at least for the foreseeable future, is that the functional analogues of pain, anxiety, guilt and even depression can be preserved without their nasty raw feels as we understand them today. We can retain the functional analogues of discontent - arguably the motor of progress - and retain the discernment and critical insight lacking in the euphorically manic. Even if hedonic tone is massively enhanced, and even if our reward centres are physically and functionally amplified, then it's still possible in principle to conserve much of our existing preference architecture. If you prefer Mozart to Beethoven, or philosophy to pushpin, then you can still retain this preference ranking even if your hedonic tone is hugely enriched.

Now personally, I think it would be better if our preference architecture were radically changed, and we pursued [please pardon the jargon] a "re-encephalisation of emotion". Evolution via natural selection has left us strongly predisposed to form all manner of dysfunctional preferences that harm both ourselves and others for the benefit of our genes. Recall Genghis Khan: "The greatest happiness is to scatter your enemy, to drive him before you, to see his cities reduced to ashes, to see those who love him shrouded in tears, and to gather into your bosom his wives and daughters."

Now I'm told academia isn't quite that bad, but even university life has its forms of urbane savagery - its competitive status-seeking and alpha-male dominance rituals: a zero-sum game with many losers. Too many of our preferences reflect nasty behaviours and states of mind that were genetically adaptive in the ancestral environment. Instead, wouldn't it be better if we rewrote our own corrupt code? I've focused here on genetically enhancing hedonic tone. Yet mastery of the biology of emotion means that we'll be able, for instance, to enlarge our capacity for empathy, functionally amplifying mirror neurons and engineering a sustained increase in oxytocin-release to promote trust and sociability. Likewise, we can identify the molecular signatures of, say, spirituality, our aesthetic sense, or our sense of humour - and modulate and "over-express" their psychological machinery too. From an information-theoretic perspective, what is critical to an adaptive, flexible, intelligent response to the world is not our absolute point on a hedonic scale but that we are informationally sensitive to differences. Indeed information theorists sometimes simply define information as a "difference that makes a difference".
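The information-theoretic point can be made concrete with a toy sketch (the numbers and state names below are purely hypothetical, chosen only to illustrate the idea): an agent whose choices track hedonic *differences* behaves identically whether its hedonic scale is centred on misery or on bliss, because adding a constant to every state's hedonic value leaves the whole preference ranking untouched.

```python
# Toy illustration: choice driven by hedonic *differences* is invariant
# under a uniform shift of the whole hedonic scale.
# All numbers are purely hypothetical.

def preference_ranking(hedonic_values):
    """Rank options from most to least preferred by hedonic value."""
    return sorted(hedonic_values, key=hedonic_values.get, reverse=True)

# A Darwinian palette: a mixture of pleasures and pains (negative values).
darwinian = {"listen to Mozart": 3, "listen to Beethoven": 1,
             "do philosophy": 2, "play pushpin": -4}

# The same mind after hedonic recalibration: every state shifted far
# above "hedonic zero", but the differences between states are untouched.
recalibrated = {state: value + 100 for state, value in darwinian.items()}

# The preference architecture is conserved: both agents make identical
# choices, though the recalibrated one never dips below bliss.
assert preference_ranking(darwinian) == preference_ranking(recalibrated)
print(preference_ranking(recalibrated)[0])  # -> listen to Mozart
```

Only the gaps between options do any work in selecting behaviour; the absolute offset is invisible to choice. That gap is the "difference that makes a difference".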

However, to stress again, this re-encephalisation of emotion is optional. It's technically feasible to engineer the well-being of all sentience and retain most but not all of our existing preference architecture. The three technical options for abolishing suffering that I've presented - wireheading, designer drugs and genetic engineering - aren't mutually exclusive. Are they exhaustive? I don't know of any other viable options. Some transhumanists believe we could one day all be scanned, digitized and uploaded into inorganic computers and reprogrammed. Well, perhaps, I'm sceptical; but in any case, this proposal doesn't solve the suffering of existing organic life unless we embrace so-called destructive uploading - a Holocaust option I'm not even going to consider here.

WHY IT WILL HAPPEN

OK, it's technically feasible. A world without suffering would be wonderful; and full-blown paradise-engineering even better. But again, so what? It's technically feasible to build a thousand-metre cube of cheddar cheese. Why is a pain-free world going to happen? Perhaps it's just wishful thinking. Perhaps we'll opt to retain the biology of suffering indefinitely.

The counterargument here is that whether or not one is sympathetic to the abolitionist project, we are heading for a reproductive revolution of designer babies. Prospective parents are soon going to be choosing the characteristics of their future children. We're on the eve of the Post-Darwinian Transition, not in the sense that selection pressure will be any less severe, but evolution will no longer be "blind" and "random": there will no longer be natural selection but unnatural selection. We will be choosing the genetic makeup of our future offspring, selecting and designing alleles and allelic combinations in anticipation of their consequences. There will be selection pressure against nastier alleles and allelic combinations that were adaptive in the ancestral environment.

Unfortunately, this isn't a rigorous argument, but imagine you are choosing the genetic dial-settings for mood - the hedonic set-point - of your future children. What settings would you pick? You might not want gradients of lifelong superhappiness, but the overwhelming bulk of parents will surely want to choose happy children. For a start, they are more fun to raise. Most parents across most cultures say, I think sincerely, that they want their children to be happy. One may be sceptical of parents who say happiness is the only thing they care about for their kids - many parents are highly ambitious. But other things being equal, happiness signals success - possibly the ultimate evolutionary origin of why we value the happiness of our children as well as our own.

Of course the parental choice argument isn't decisive. Not least, it's unclear how many more generations of free reproductive choices lie ahead before radical antiaging technologies force a progressively tighter collective control over our reproductive decisions - since a swelling population of ageless quasi-immortals can't multiply indefinitely in finite physical space. But even if centralised control of reproductive decisions becomes the norm, and procreation itself becomes rare, the selection pressure against primitive Darwinian genotypes will presumably be intense. Thus it's hard to envisage what future social formations would really allow the premeditated creation of any predisposition to depressive or anxiety disorders - or even the "normal" pathologies of unenhanced consciousness.

Non-Human Animals

So far I've focused on suffering in just one species. This restriction of the abolitionist project is parochial; but our anthropocentric bias is deeply rooted. Hunting, killing, and exploiting members of other species enhanced the inclusive fitness of our genes in the ancestral environment. [Here we are more akin to chimpanzees than bonobos.] So unlike, say, the incest taboo, we don't have an innate predisposition to find, say, hunting and exploiting non-human animals wrong. We read that Irene Pepperberg's parrot, with whom we last shared a common ancestor several hundred million years ago, had the mental age of a three-year-old child. But it's still legal for so-called sportsmen to shoot birds for fun. If sportsmen shot babies and toddlers of our own species for fun, they'd be judged criminal sociopaths and locked up.

So there is a contrast: the lead story in the news media is often a terrible case of human child abuse and neglect, an abducted toddler, or abandoned Romanian orphans. Our greatest hate-figures are child abusers and child murderers. Yet we routinely pay for the industrialized mass killing of other sentient beings so we can eat them. We eat meat even though there's a wealth of evidence that functionally, emotionally, intellectually - and critically, in their capacity to suffer - the non-human animals we factory-farm and kill are equivalent to human babies and toddlers.

From a notional God's-eye perspective, I'd argue that morally we should care just as much about the abuse of functionally equivalent non-human animals as we do about members of our own species - about the abuse and killing of a pig as we do about the abuse or killing of a human toddler. This violates our human moral intuitions; but our moral intuitions simply can't be trusted. They reflect our anthropocentric bias - not just a moral limitation but an intellectual and perceptual limitation too. It's not that there are no differences between human and non-human animals, any more than there are no differences between black people and white people, freeborn citizens and slaves, men and women, Jews and gentiles, gays and heterosexuals. The question is rather: are they morally relevant differences? This matters because morally catastrophic consequences can ensue when we latch on to a real but morally irrelevant difference between sentient beings. [Recall how Aristotle, for instance, defended slavery. How could he be so blind?] Our moral intuitions are poisoned by genetic self-interest - they weren't designed to take an impartial God's-eye view. But greater intelligence brings a greater cognitive capacity for empathy - and potentially an extended circle of compassion. Maybe our superintelligent/superempathetic descendants will view non-human animal abuse as no less abhorrent than we view child abuse: a terrible perversion.

True or not, surely we aren't going to give up eating each other? Our self-interested bias is too strong. We like the taste of meat too much. Isn't the notion of global veganism just utopian dreaming?
Perhaps so. Yet within a few decades, the advent of genetically-engineered vatfood means that we can enjoy eating "meat" tastier than anything available today - without any killing and cruelty. As a foretaste of what's in store, the In Vitro Meat Consortium was initiated at a workshop held at the Norwegian University of Life Sciences in June 2007. Critically, growing meat from genetically-engineered single cells is likely to be scalable indefinitely: its global mass consumption is potentially cheaper than using intact non-human animals. Therefore - assuming that for the foreseeable future we retain the cash nexus and market economics - cheap, delicious vatfood is likely to displace the factory-farming and mass-killing of our fellow creatures.

One might wonder sceptically: are most people really going to eat gourmet vatfood, even if it's cheaper and more palatable than flesh from butchered non-human animals?
If we may assume that vatfood is marketed properly, yes. For if we discover that we prefer the taste of vat-grown meat to carcasses of dead animals, then the moral arguments for a cruelty-free diet will probably seem much more compelling than they do at present.

Yet even if we have global veganism, surely there will still be terrible cruelty in Nature? Wildlife documentaries give us a very Bambified view of the living world: it doesn't make good TV spending half an hour showing a non-human animal dying of thirst or hunger, or slowly being asphyxiated and eaten alive by a predator. And surely there has to be a food chain? Nature is cruel; but predators will always be essential on pain of a population explosion and Malthusian catastrophe?

Not so. If we want to, we can use depot contraception, redesign the global ecosystem, and rewrite the vertebrate genome to get rid of suffering in the rest of the natural world too. For non-human animals don't need liberating; they need looking after. We have a duty of care, just as we do to human babies and toddlers, to the old, and the mentally handicapped. This prospect might sound remote; but habitat-destruction means that effectively all that will be left of Nature later this century is our wildlife parks. Just as we don't feed terrified live rodents to snakes in zoos - we recognize that's barbaric - will we really continue to permit cruelties in our terrestrial wildlife parks because they are "natural"?

The last frontier on Planet Earth is the ocean. Intuitively, this might seem to entail too complicated a task. But the exponential growth of computer power and nanorobotic technologies means that we can in theory comprehensively re-engineer the marine ecosystem too. Currently such re-engineering is still impossible; in a few decades, it will be computationally feasible but challenging; eventually, it will be technically trivial. So the question is: will we actually do it? Should we do it - or alternatively should we conserve the Darwinian status quo? Here we are clearly in the realm of speculation. Yet one may appeal to what might be called The Principle Of Weak Benevolence. Unlike the controversial claim that superintelligence entails superempathy, The Principle Of Weak Benevolence doesn't assume that our technologically and cognitively advanced descendants will be any more morally advanced than we are now.

Let's give a concrete example of how the principle applies. If presented today with the choice of buying either free-range or factory-farmed eggs, most consumers will pick the free-range eggs. If battery-farmed eggs are 1 penny cheaper, most people will still pick the cruelty-free option. No, one shouldn't underestimate human malice, spite and bloody-mindedness; but most of us have at least a weak bias towards benevolence. If any non-negligible element of self-sacrifice is involved, for example if free-range eggs cost even 20 pence more, then sadly sales fall off sharply. My point is that if - and it's a big if - the sacrifice involved for the morally apathetic could be made non-existent or trivial, then the abolitionist project can be carried to the furthest reaches of the living world.

EvilRedGuy
18th September 2010, 13:43
I like all these ideas. Science FTW. :cool:

I also hope people see that drugs today are made for profit rather than for people's well-being, and that they're just the easily-created drugs (no one bothers to invent new, healthier ones; we just take those that are already here, which are obviously bad). I also go for legalizing drugs. :thumbup1:

CommunityBeliever
18th September 2010, 14:28
drugs today are meant for gaining profit and not for the well-being of people

Unfortunately it's way too profit-focused, which is why they rip you off by selling drugs at way too high a price.


I also go for legalizing drugs

Legalize weed :w00t:

RED DAVE
18th September 2010, 14:49
BRAVE NEW BULLSHIT.

The current task is overthrowing capitalism, not redesigning the human genome. The last thing anyone should want to see is the capitalist class with the capacity for massive human genetic reprogramming.

RED DAVE

CommunityBeliever
18th September 2010, 14:56
The current task is overthrowing capitalism, not redesigning the human genome.

Your enemy is the capitalists, not all the people who aren't working to overthrow capitalism.

Scientists and intellectuals should not be considered to be enemies.

The ideas here should be evaluated as just that, ideas, not evaluated based upon the person or the intentions of the person creating them.

RED DAVE
18th September 2010, 15:59
Your enemy is the capitalists, not all the people who aren't working to overthrow capitalism.

Scientists and intellectuals should not be considered to be enemies.

The ideas here should be evaluated as just that, ideas, not evaluated based upon the person or the intentions of the person creating them.

I would not trust scientists, working under a capitalist government, to fuck around with basic structures of the human mind.

The morality of many scientists is the morality of their paychecks. If the government wants them to engage in thought control, they'll do it with a smile.

RED DAVE

AnthArmo
18th September 2010, 16:30
If Nietzsche could read this, he would throw an absolute fit.

ÑóẊîöʼn
18th September 2010, 17:36
If Nietzsche could read this, he would throw an absolute fit.

Nietzsche was a sadomasochist?

EvilRedGuy
18th September 2010, 18:25
Why are so many people afraid of science and technocracy?

ÑóẊîöʼn
18th September 2010, 19:18
Why are so many people afraid of science and technocracy?

People (at least on this forum) aren't afraid of science, although they often conflate a technology with its specific implementation under the capitalist price system.

Technocracy on the other hand, is criticised for not being revolutionary, at least as the term is understood by certain Marxists.

Ovi
18th September 2010, 20:27
Why are so many people afraid of science and technocracy?
Yes, because being against implanting electrodes in the brain to make yourself artificially happy makes you a primitivist.

CommunityBeliever
19th September 2010, 03:01
Yes, because being against implanting electrodes in the brain to make yourself artificially happy makes you a primitivist.

Human beings evolved for reproduction, not for happiness!

So if you want to keep going through a cycle of boredom, depression, and suffering without treating it, you don't have to treat it.

Everyone else, however, can take the designer drugs or implant those electrodes so that they don't have to go through any day-to-day suffering, and can have bliss that is beyond our peak experiences.

ÑóẊîöʼn
19th September 2010, 03:56
Human beings evolved for reproduction, not for happiness!

I think this fundamental issue actually represents a problem for your vision, CB.

You see, unhappiness and suffering in general is evolution's crude, roundabout way of informing organisms that something is wrong. Conversely, happiness is evolution's crude, roundabout way of informing organisms that things are fine, keep doing what you're doing.

If a human is perfectly happy even as things are falling apart around them, where does the motivation come from to stop the terrible thing happening? They may intellectually realise that something is up, but humans are notorious for thinking with their hearts, not their heads so to speak.

If a Friendly AI is around to look after things, then human civilisation simply becomes an eternal day care centre for the terminally elated.


So if you want to keep going through a cycle of boredom, depression, and suffering without treating it, you don't have to treat it.

Everyone else, however, can take the designer drugs or implant those electrodes so that they don't have to go through any day-to-day suffering, and can have bliss that is beyond our peak experiences.

Problem is, the promise of instant bliss is highly alluring - just look at the international drugs trade. I'm not entirely sure that most humans wouldn't go for it given the option, leading to the problems I alluded to above.

CommunityBeliever
19th September 2010, 04:35
I think this fundamental issue actually represents a problem for your vision, CB.

You see, unhappiness and suffering in general is evolution's crude, roundabout way of informing organisms that something is wrong. Conversely, happiness is evolution's crude, roundabout way of informing organisms that things are fine, keep doing what you're doing.

You obviously don't understand the gradients of bliss concept; read through the whole article at abolitionist.com

We can retain the functional analogues of discontent - arguably the motor of progress - and retain the discernment and critical insight lacking in the euphorically manic. Even if hedonic tone is massively enhanced, and even if our reward centres are physically and functionally amplified, then it's still possible in principle to conserve much of our existing preference architecture. If you prefer Mozart to Beethoven, or philosophy to pushpin, then you can still retain this preference ranking even if your hedonic tone is hugely enriched.


They may intellectually realise that something is up, but humans are notorious for thinking with their hearts, not their heads so to speak.

This is exactly why AI should take over society and science: AIs are notorious for thinking with their minds, not their hearts, so to speak.


If a Friendly AI is around to look after things, then human civilisation simply becomes an eternal day care centre for the terminally elated.

I see no problem with that. And it's not like people won't do things just because they are completely blissful.

ÑóẊîöʼn
19th September 2010, 04:58
You obviously don't understand the gradients of bliss concept; read through the whole article at abolitionist.com

We can retain the functional analogues of discontent - arguably the motor of progress - and retain the discernment and critical insight lacking in the euphorically manic.

I can see how that's possible in principle, but how do we know that human neurology would be able to support such a "patch"? The alterations required could be so deep and wide-ranging that the resulting organism would not be recognisably human, at least in terms of psychology and values.

It might be better for those who no longer want to experience suffering to simply upload themselves and edit their own code accordingly.


This is exactly why AI should take over society and science; they are notorious for thinking with their minds, not their hearts, so to speak.

No, I don't think humans should be disenfranchised from the political process simply for being human. We would still be full participants in society, so it's only fair that we have a say in how it's run.


I see no problem with that. And it's not like people won't do things just because they are completely blissful.

Perhaps not everything will be neglected, but the actions of those in a state of bliss that we observe today don't bode well for the "eternally happy" vision of the future.

CommunityBeliever
19th September 2010, 05:05
I can see how that's possible in principle, but how do we know that human neurology would be able to support such a "patch"?

There are multiple feasible methods that will become available over the next millennium, such as designer drugs, wireheading, and genetic engineering; this is all explained in the articles at abolitionist.com, if you care to read them.


It might be better for those who no longer want to experience suffering to simply upload themselves and edit their own code accordingly.

Your uploaded copy is different from yourself, and I don't think you would want to kill yourself just so you can leave an uploaded copy behind.


Perhaps not everything will be neglected, but the actions of those in a state of bliss that we observe today don't bode well for the "eternally happy" vision of the future.

What exactly do you have against bliss and people that have it??

AnthArmo
19th September 2010, 05:26
Nietzsche was a sadomasochist?

Yes :P

I brought up Nietzsche for a reason. Pretty central to his critique of Christianity and the "Herd Morality" is that pretty much all ethical systems inspired by and leading off from Christianity aspire to turn everything and all behaviours into that which is "Pleasant and Useful". Christianity aspires to "Abolish Suffering", and whenever suffering is eliminated, that is called "Progress".

He hated this because Suffering is essential to human life. Suffering allows us to grow and develop as individuals. By suffering, we can reflect on the actions that caused the suffering and develop as a result. A world without suffering would be a world of undeveloped, infantilised and weak individuals with no sense of character or real value.

ÑóẊîöʼn
19th September 2010, 06:22
Yes :P

I brought up Nietzsche for a reason. Pretty central to his critique of Christianity and the "Herd Morality" is that pretty much all ethical systems inspired by and leading off from Christianity aspire to turn everything and all behaviours into that which is "Pleasant and Useful". Christianity aspires to "Abolish Suffering", and whenever suffering is eliminated, that is called "Progress".

I think this has more to do with the anemic qualities of 19th-century Christianity than the religion as a whole. I mean, the whole religion centres around the torture and death of a god in human form, and Christian theologians could be really morbid in their focus on sin and the wages of sin.


He hated this because Suffering is essential to human life. Suffering allows us to grow and develop as individuals. By suffering, we can reflect on the actions that caused the suffering and develop as a result.

Not really. Suffering can also turn people into screaming animals or quivering wrecks. So much for "that which does not kill you makes you stronger".


A world without suffering would be a world of undeveloped, infantilised and weak individuals with no sense of character or real value.

In all fairness we don't actually know that.

ÑóẊîöʼn
19th September 2010, 06:29
There are multiple feasible methods that will become available over the next millennium, such as designer drugs, wireheading, and genetic engineering; this is all explained in the articles at abolitionist.com, if you care to read them.

I consider all those solutions sub-optimal. They're not radical enough.


Your uploaded copy is different from yourself, and I don't think you would want to kill yourself just so you can leave an uploaded copy behind.

Look up the Ship of Theseus and get back to me.


What exactly do you have against bliss and people that have it??

I'm sure bliss-heads would be wonderful people, at least compared to some people today. But I remain unconvinced that such people should form the bulk of society while AIs have the real levers of power.

CommunityBeliever
19th September 2010, 09:00
Look up the Ship of Theseus and get back to me.

This is very speculative, because it remains to be seen whether we will ever be able to upload a mind with perfect accuracy; it is likely that some or many of your features will be lost in the uploading process, although scientific advances may make it increasingly accurate.

Still, any digital copy of you will be changed and distinct from you, as it will operate on an entirely different architecture: an electronic one whose processing will, by its very nature, work differently.


I'm sure bliss-heads would be wonderful people, at least compared to some people today. But I remain unconvinced that such people should form the bulk of society

I am still not certain as to what your objection is to gradients of bliss.


AIs have the real levers of power.

AIs? Why plural?

EvilRedGuy
19th September 2010, 12:02
Yes, because being against implanting electrodes in the brain to make yourself artificially happy makes you a primitivist.

What has that to do with what I said?
I'm talking about Technocracy, not implanting electrodes in anyone's head.

CommunityBeliever
19th September 2010, 12:10
Technocracy on the other hand, is criticised for not being revolutionary, at least as the term is understood by certain Marxists.

Some people oppose Technocracy because they aim to abolish the class of people known as technical experts (Technocracy's ruling class) and replace them with a collective super-AI :cool:

Ocean Seal
19th September 2010, 14:33
Engineering happiness doesn't seem like a socialist goal. It seems like something that the capitalist class would use to keep the working class oppressed and from reaching its full potential. Let's face it, if we drug or engineer our happiness we are not changing anything. We are allowing the ruling class to step on us and become parasitic without us even seeing it. We must always have an objective eye for our material world, not some dreamland.

ÑóẊîöʼn
19th September 2010, 17:18
This is very speculative, because it remains to be seen whether we will ever be able to upload a mind with perfect accuracy; it is likely that some or many of your features will be lost in the uploading process, although scientific advances may make it increasingly accurate.

Perfection is not required, since human personalities are fuzzy and mutable in any case.


Still, any digital copy of you will be changed and distinct from you, as it will operate on an entirely different architecture: an electronic one whose processing will, by its very nature, work differently.

That doesn't matter. As long as the process is gradual, there will be a causal link between the uploadee's human and digital forms, and as a result there will be a subjective continuity of consciousness.
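For what it's worth, here's a toy illustration of the gradual-replacement intuition (my own sketch; the "planks" stand in for neurons or any other substrate you like): every stage overlaps almost entirely with its predecessor, giving an unbroken causal chain, even though the first and last stages share no parts at all.

```python
# Replace one "plank" per step and record every intermediate stage.
ship = [f"plank{i}" for i in range(10)]
stages = [list(ship)]
for i in range(len(ship)):
    ship[i] = f"upload{i}"  # swap one biological part for a digital one
    stages.append(list(ship))

def overlap(a, b):
    """Number of parts two stages have in common."""
    return len(set(a) & set(b))

# Adjacent stages differ by exactly one part: continuity at every step...
assert all(overlap(stages[k], stages[k + 1]) == 9 for k in range(10))
# ...yet the endpoints have nothing in common.
assert overlap(stages[0], stages[-1]) == 0
```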


I am still not certain as to what your objection is to gradients of bliss.

You have yet to prove that motivation won't be an issue.


AIs? Why plural?

Why not? If humans can build one AI, why not more? If humans can create AI, why can't AI create other AI? If anything, AI will find it easier to reproduce themselves.


Some people oppose Technocracy because they aim to abolish the class of people known as technical experts (Technocracy's ruling class) and replace them with a collective super-AI :cool:

There is no ruling class in technocracy, since everyone has the same relationship to the means of production; everyone is functionally vital and everyone has the same degree of access to the products of society.

bcbm
19th September 2010, 20:45
man the unabomber manifesto gets more appealing every day. fuck this wingnut shit

ÑóẊîöʼn
19th September 2010, 20:50
man the unabomber manifesto gets more appealing every day. fuck this wingnut shit

I don't think we should take any accusations of wingnuttery seriously from someone who says the Unabomber Manifesto appeals to them. :rolleyes:

If you have any actual criticisms, it would be a good idea to bring them forth for everybody's evaluation instead of posting pointlessly provocative one-liners.

bcbm
19th September 2010, 20:56
I don't think we should take any accusations of wingnuttery seriously from someone who says the Unabomber Manifesto appeals to them. :rolleyes:

lol irony lol

and ted kaczynski's manifesto touches on exactly the kind of shit this article is talking about and seems more reasonable than, say, believing we should all be happy bots ruled by a fucking computer pumping out designer babies.


The industrial-technological system may survive or it may break down. If it survives, it may eventually achieve a low level of physical and psychological suffering, but only after passing through a long and very painful period of adjustment and only at the cost of permanently reducing human beings and many other living organisms to engineered products and mere cogs in the social machine.


imagine you are choosing the genetic dial-settings for mood - the hedonic set-point - of your future children

Not so. If we want to, we can use depot contraception, redesign the global ecosystem, and rewrite the vertebrate genome


It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions.


they aim to abolish class of people known as technical experts (technocracies ruling class) and replace them with a collective super-AI

hmm.


If you have any actual criticisms, it would be a good idea to bring them forth for everybody's evaluation instead of posting pointlessly provocative one-liners.

the ability to feel pain both psychological and physical is an evolutionary advantage and i trust millions of years of evolution just a wee bit more than some mad scientists who want to dope everybody up. red dave's point about the ruling class using this against workers is also pretty much spot on (in the article they talk specifically about how quickly this technology could be exported to poorer countries, i don't think you need much brilliance to guess what would happen next) and in a communist society i imagine our lives would be decent enough that we wouldn't need to alter our genes to be happy. i mean what the fuck, seriously?

bricolage
19th September 2010, 21:06
There's a Doctor Who episode where this happens: they have pills called 'happy' and 'forget' and the like, and people take them to forget that their parents disappeared into a continuous motorway that they never get out of, trying to escape the earth's surface, which is poisoned or something, air pollution maybe. It's a pretty bleak world. Doctor Who is still the shit though.

ÑóẊîöʼn
19th September 2010, 21:24
lol irony lol

and ted kaczynski's ideas make a lot more sense than, say, believing we should all be happy bots ruled by a fucking computer.

I think they're both far-fetched, but for different reasons.


the ability to feel pain both psychological and physical is an evolutionary advantage and i trust millions of years of evolution just a wee bit more than some mad scientists who want to dope everybody up.

That's better. Not so difficult, was it?

bcbm
19th September 2010, 21:40
i updated my rant.


I think they're both far-fetched, but for different reasons.

industrial society and its future is basically about exactly what this article wants to happen.


That's better. Not so difficult, was it?

eat me

bcbm
19th September 2010, 22:06
i also think its offensive to call this "abolitionism"

bricolage
19th September 2010, 22:27
"The Doctor and Martha travel to the far future, where a deadly trap awaits them. Deadly pharmacists lurk the streets, and an old enemy awaits them in the darkness."
http://tardis.wikia.com/wiki/Gridlock

Ha! I never realised it was called 'pharmacytown'.
http://tardis.wikia.com/wiki/Mood_drug

Ravachol
20th September 2010, 00:06
I haven't read the entire thread and I can't be arsed to do so but I'll give my opinion anyway.

I've said this many times before and I'll do so again: I'm a technophile, I love technology and its possibilities.

That being said, I can't understand how anyone can divorce technological development from the material base from which it develops. It is not neutral and will not be used neutrally. Technology (and most scientific research, save for some very abstract and pure areas) is developed with a goal and purpose. Considering most universities and research facilities are just as well integrated in the fabric and discourse of Capital, they function according to its logic. WHAT is researched, HOW it is researched and for WHOM it is researched and developed is determined by the ticking cogs of Capital's social machine.

Under Capital, technology is a resource like any other and is developed for a purpose and under the control of those who supervise and guide its development: Capital. I see this every single day at my own university and many others. Just like factories produce under the dominance of Capital, so does the 'scientific factory' produce under its iron hegemony.

Let's take Artificial Intelligence as an example. Sure, there's loads of amateur research going on and some interesting purely academic work. But what areas of AI research are way ahead of all others?
Neural Networks for relational database mining, image recognition (used in CCTV cameras), etc, etc.
The direction of research is determined by Capital.

This isn't the 'fault' of individual researchers or even of the institutions they work for, just like it isn't the fault of some assembly plant worker that he happens to assemble parts of Tanks that will crush civilians in Iraq or Afghanistan.

Should we 'stop technological research'? I don't think so. Primitivism is by no means an answer.

But we should realise blind advocacy of science without the material context is playing cheerleader for the technological wing of Capital. If there is any sort of new 'luddism' that is necessary it isn't to be rooted in the rejection of the concept of 'technology', but it ought to be rooted in the sabotage of technology at the service of Capital and at the (re-)appropriation of technology for struggle.

CommunityBeliever
20th September 2010, 00:09
Engineering happiness doesn't seem like a socialist goal.

Indeed, this has nothing to do with socialism, just ethics. Ethics, and the question of whether or not you want to get rid of all suffering and make everyone happy.


and in a communist society i imagine our lives would be decent enough that we wouldn't need to alter our genes to be happy. i mean what the fuck, seriously?

We would certainly be happier, but there is no "happy enough"; science shows us that people will still go through cycles of depression, because we did not evolve to feel happy but to increase reproduction.


It seems like something that the capitalist class would use to keep the working class oppressed and from reaching its full potential

Technophobes could say that about any technology that has come into existence since primitive communism (http://en.wikipedia.org/wiki/Primitive_communism), like guns, nuclear weapons, etc. Does this mean you want us to stop developing all technologies?

And GPS certainly stifles any attempt people would have at becoming revolutionary, because now if you want to start an uprising, the police will track you and, on that basis, go after everyone you talked with.

The Internet and computers certainly haven't helped revolutionaries either: first because those are tracked too, and secondly because all they have done is make the workers who have access to them more divided and apathetic.

Objective Outlook


You have yet to prove that motivation won't be an issue.

You have yet to prove that you have read and comprehended the project. You see, this project wants to keep a hedonic treadmill, which will still allow us to tell one thing we like from another.

Three points here:
First, this genetic recalibration might seem to be endorsing another kind of uniformity; but it's worth recalling that happier people - and especially hyperdopaminergic people - are typically responsive to a broader range of potentially rewarding stimuli than depressives: they engage in more exploratory behaviour. This makes getting stuck in a sub-optimal rut less likely, both for the enhanced individual and posthuman society as a whole.

Anyway, one of the advantages of genetically recalibrating the hedonic treadmill rather than abolishing it altogether, at least for the foreseeable future, is that the functional analogues of pain, anxiety, guilt and even depression can be preserved without their nasty raw feels as we understand them today. We can retain the functional analogues of discontent - arguably the motor of progress - and retain the discernment and critical insight lacking in the euphorically manic. Even if hedonic tone is massively enhanced, and even if our reward centres are physically and functionally amplified, then it's still possible in principle to conserve much of our existing preference architecture. If you prefer Mozart to Beethoven, or philosophy to pushpin, then you can still retain this preference ranking even if your hedonic tone is hugely enriched.
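To make the "recalibrated treadmill" idea more concrete, here is a minimal simulation sketch (my own, not the project's; the mean-reverting model and every parameter are assumptions for illustration): well-being fluctuates around a set-point it keeps drifting back to, and genetic recalibration amounts to raising that set-point while leaving the fluctuation dynamics, the treadmill itself, in place.

```python
import random

def mood_series(set_point, steps=2000, reversion=0.1, noise=1.0, seed=42):
    """Well-being as a mean-reverting (AR(1)) process around a genetic set-point."""
    rng = random.Random(seed)
    mood, series = set_point, []
    for _ in range(steps):
        # drift back toward the set-point, plus random "life events"
        mood += reversion * (set_point - mood) + rng.gauss(0, noise)
        series.append(mood)
    return series

mean = lambda xs: sum(xs) / len(xs)

baseline = mood_series(set_point=0.0)       # today's Darwinian calibration
recalibrated = mood_series(set_point=10.0)  # a genetically raised set-point

assert mean(recalibrated) - mean(baseline) > 5    # average well-being is lifted,
assert max(recalibrated) - min(recalibrated) > 1  # but moods still fluctuate
```

With the same noise sequence, the recalibrated series is simply the baseline shifted upward, which is the sense in which the functional analogues of discontent survive: dips below one's personal average still occur, they just bottom out at a much higher hedonic level.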


We are allowing the ruling class to step on us and become parasitic without us even seeing it. We must always have an objective eye for our material world, not some dreamland.

Part of this project is also increasing people's intelligence, so that people will have a more objective outlook on the world. And since there are 'gradients of bliss', people will still have a method of distinguishing something they like from something they don't really like.

Whatever our ultimate destination, it would be more prudent, I think, to aim for both superhappiness and superintelligence


the ability to feel pain both psychological and physical is an evolutionary advantage and i trust millions of years of evolution just a wee bit more than some mad scientists who want to dope everybody up.

To evolution it was also an advantage for us to put reproduction before our happiness.

Screw evolution! It is time that we took care of ourselves - it is time that we overcome the horribly slow process of evolution and that we use biotechnology to design ourselves.

We will use transhumanist biotechnology not only to abolish suffering, but also to abolish aging, disability, and disease. Furthermore, we will have ocular implants to greatly increase our range of vision and let us browse the web directly at any time, as well as greatly enhanced limbs, armor, and physical strength.

Quail
20th September 2010, 00:20
I'm not comfortable with the idea of being permanently drugged up to be happy. Personally, I've had some shit times in my life and while a lot of the time I do wish there could be a magical way out, I appreciate the good times a hell of a lot more because I have suffered. I don't think that someone could experience a decent life without feeling a full range of emotions. I'm all for better psychiatric drugs or whatever, but being pacified with happy pills as an AI makes decisions for me doesn't sound like my cup of tea.

It's not a very scientific argument, but just thought I'd give my opinion.

Oh, and one more thing:

Furthermore, we will have ocular implants to greatly increase our range of vision and let us browse the web directly at any time, as well as greatly enhanced limbs, armor, and physical strength.

Like this? :lol:
http://t1.gstatic.com/images?q=tbn:ANd9GcSedhd8M0HdBdVu1PMx9B-qsduNrLp9qnoSm2JSL2z-L466NGE&t=1&usg=__EEbtAX7wscT_OHekUQllFRn3itQ=

CommunityBeliever
20th September 2010, 00:34
There is no ruling class in technocracy, since everyone has the same relationship to the means of production; everyone is functionally vital and everyone has the same degree of access to the products of society.

http://www.revleft.com/vb/showpost.php?p=1869388&postcount=41


Why not? If humans can build one AI, why not more? If humans can create AI, why can't AI create other AI? If anything, AI will find it easier to reproduce themselves.

http://www.revleft.com/vb/strong-ai-and-t141556/index.html?p=1869397#post1869397


I don't think that someone could experience a decent life without feeling a full range of emotions.

There would be a "hedonic treadmill"

Quail
20th September 2010, 00:40
There would be a "hedonic treadmill"

But if you were always happy, or should I say, above a certain level of happiness, I don't see how you can actually feel a full range of emotions?

The rawness of some emotions is good for art. A lot of my favourite music is written by people who weren't happy, and if people can't appreciate the rawness of depression, etc, then I think a lot of art would suffer.

Besides, a communist society would be a much less depressing place. I don't think it is really necessary to look into ways of making everyone super-happy. There's nothing wrong with using mood-altering drugs, but I don't think I'd like to feel like that constantly.

ÑóẊîöʼn
20th September 2010, 00:44
http://www.revleft.com/vb/showpost.php?p=1869388&postcount=41

There's nothing in the etymology of Technocracy that suggests that skill must by necessity be invested in a ruling minority.

CommunityBeliever
20th September 2010, 00:45
There's nothing in the etymology of Technocracy that suggests that skill must by necessity be invested in a ruling minority.

Please post what you have to say (hopefully in detail) in that Technocracy thread since this thread has nothing to do with Technocracy.


But if you were always happy, or should I say, above a certain level of happiness, I don't see how you can actually feel a full range of emotions?

I don't see why not.


There's nothing wrong with using mood-altering drugs, but I don't think I'd like to feel like that constantly.

Nobody is going to force technology down your throat; however, perhaps your descendants will be more receptive.

ÑóẊîöʼn
20th September 2010, 00:53
Please post what you have to say (hopefully in detail) in that Technocracy thread since this thread has nothing to do with Technocracy.

Backseat moderating is bad enough, but you compound the offence by telling me to get back on topic from a derailment I did not start!

Please refrain from doing that again.

Quail
20th September 2010, 00:57
I don't see why not.

Nobody is going to force technology down your throat; however, perhaps your descendants will be more receptive.

Perhaps they will, who knows. I would argue that you wouldn't be able to feel the full range of emotions, because there would always be an underlying feeling of things being okay.

Also, what is your opinion about the possible impact of taking the edge off of bad emotions on the arts?

CommunityBeliever
20th September 2010, 01:08
Also, what is your opinion about the possible impact of taking the edge off of bad emotions on the arts?

You are right, they may have an impact; however, my morals tell me that getting rid of suffering and pain is more important. So that leads me to the question: is it worth suffering just so you can create a piece of art?

* One possible solution to this problem is that the Robot Collective may spawn a thread in the main processing pool that simulates conditions of suffering and other feelings in order to produce the works that you describe.


Perhaps they will, who knows. I would argue that you wouldn't be able to feel the full range of emotions, because there would always be an underlying feeling of things being okay.

Well, there would be a feeling of blissfulness and happiness in those who choose to become wireheads; others may keep their primitive forms and just use designer drugs whenever they feel depressed.

So they would have happiness and not necessarily a feeling that everything is okay. The increased intelligence that we create through biotechnology will offset whatever desensitisation might come from the increased happiness, so they would be able to tell if something is fucked up, because they would be ten times as smart as we are now.

Quail
20th September 2010, 01:20
You are right, they may have an impact, however, my morals tell me that getting rid of suffering and pain is more important.

Is it worth it to suffer just so you can create an art piece?

Some of the suffering that people live through makes them who they are, makes them more understanding and tolerant, and also makes them want to fight for an end to that kind of suffering. The suffering of humans under capitalism is what made me a communist. Art is actually a release for my emotions, and if I didn't have emotional issues, I don't think that I would be the creative, arty person I am today.


Well, there would be a feeling of blissfulness and happiness in those who choose to become wireheads; others may keep their primitive forms and just use designer drugs whenever they feel depressed.

So they would have happiness and not necessarily a feeling that everything is okay. The increased intelligence that we create through biotechnology will offset whatever desensitisation might come from the increased happiness, so they would be able to tell if something is fucked up, because they would be ten times as smart as we are now.

Two points here:

1. Drugs should never be the first answer to depression/anxiety/whatever. People need to learn to recover, otherwise they will always suffer those negative emotions. Some magical happiness drug isn't a good answer to people being unhappy. Treat the causes of their unhappiness, not the symptoms.

2. The blissfulness and happiness to take the edge off of raw emotions is what I was talking about. If there was always a background blissfulness, you wouldn't be able to experience other emotions properly.

CommunityBeliever
20th September 2010, 01:29
The suffering of humans under capitalism is what made me a communist.

Same here.


Some magical happiness drug isn't a good answer to people being unhappy. Treat the causes of their unhappiness, not the symptoms.

I am not ruling out treating the causes of unhappiness; however, in this thread I am describing a way of treating one cause of unhappiness: the flaws in our neural architecture.


If there was always a background blissfulness, you wouldn't be able to experience other emotions properly.

If you want to, you don't have to be a wirehead that doesn't suffer; however, what remains to be clarified is why you would ever want to suffer.

You would still be able to be objective because one of the things we will do is increase the intelligence of individuals using the same biotechnology...

Quail
20th September 2010, 01:36
I am not ruling out treating the causes of unhappiness; however, in this thread I am describing a way of treating one cause of unhappiness: the flaws in our neural architecture.

Fair enough.


If you want to, you don't have to be a wirehead that doesn't suffer; however, what remains to be clarified is why you would ever want to suffer.

You would still be able to be objective because one of the things we will do is increase the intelligence of individuals using the same biotechnology...

Suffering is part of being human. Feeling good and feeling bad help people to grow into caring individuals. Feeling bad makes the good bits better. For example, when my parents take my baby for the weekend, I feel down because I miss him, but I wouldn't not want to feel down when he wasn't there, because then it wouldn't be so wonderful when he came back.

Suffering is also quite difficult to define I think. Suffering is basically when you're less happy than you were, so if you always felt blissful, you would "suffer" when you weren't totally euphoric. I think that any drug that made you constantly happy would be very addictive psychologically, so even if people wanted to stop using it, I don't know whether they would be able to.

CommunityBeliever
20th September 2010, 01:52
Suffering is part of being human. Feeling good and feeling bad help people to grow into caring individuals. Feeling bad makes the good bits better. For example, when my parents take my baby for the weekend, I feel down because I miss him, but I wouldn't not want to feel down when he wasn't there, because then it wouldn't be so wonderful when he came back.

* From abolitionist.com:

"It's tempting to suppose that purely "psychological" pain - loneliness, rejection, existential angst, grief, anxiety, depression - can't be as atrocious as extreme physical pain; yet the reason over 800,000 people in the world take their own lives every year is mainly psychological distress. It's not that other things - great art, friendship, social justice, a sense of humour, cultivating excellence of character, academic scholarship, etc - aren't valuable; but rather when intense physical or psychological distress intrudes - either in one's own life or that of a loved one - we recognize that this intense pain has immediate priority and urgency. and If you are in agony after catching your hand in the door, then you'd give short shrift to someone who urged you to remember the finer things in life. If you're distraught after an unhappy love affair, then you don't want to be tactlessly reminded it's a beautiful day outside.

OK, while it lasts, extreme pain or psychological distress has an urgency and priority that overrides the rest of one's life projects; but so what? When the misery passes, why not just get on with one's life as before?

Well, natural science aspires to "a view from nowhere", a notional God's-eye view. Physics tells us that no here-and-now is privileged over any other; all are equally real. Science and technology are shortly going to give us Godlike powers over the entire living world to match this Godlike perspective. I argue that so long as there is any sentient being who is undergoing suffering similar to our distress, that suffering should be tackled with the same priority and urgency as if it were one's own pain or the pain of a loved one. With power comes complicity. Godlike powers carry godlike responsibilities. Thus the existence of suffering 200 years ago, for instance, may indeed have been terrible; but it's not clear that such suffering can sensibly be called "immoral" - because there wasn't much that could be done about it. But thanks to biotechnology, now there is - or shortly will be. Over the next few centuries, suffering of any kind is going to become optional."

* From paradise engineering (http://www.paradise-engineering.com/heav3.htm) site:

"...two hundred years ago, before the development of potent analgesics and surgical anaesthetics, the notion that so-called "physical" pain could be banished from most people's lives would have seemed crankish. Most of us in the affluent western nations now take its daily absence for granted - though opiophobia prolongs its presence in the lives of a minority. The prospect that what we describe as "mental" pain, too, could ever be eradicated is equally counter-intuitive. The feasibility of its abolition via biotechnology turns its deliberate retention into an issue of social policy and ethical choice..."



I think that any drug that made you constantly happy would be very addictive psychologically, so even if people wanted to stop using it, I don't know whether they would be able to.

Which is the precise reason that most people will embrace wirehead technology and let it make them happy all of the time.

bcbm
20th September 2010, 01:57
We would certainly be happier, but there is no "happy enough": science shows us that people will still go through cycles of depression, because we did not evolve to feel happy but to increase reproduction.

i don't see anything particularly wrong with being depressed sometimes, certainly not enough to warrant manufacturing happiness.


Technophobes could say that about any technology that has come into existence since primitive communism (http://en.wikipedia.org/wiki/Primitive_communism), like guns, nuclear weapons, etc. Does this mean you want us to stop developing all technologies?

under capitalism? i don't think it would hurt but thats an impossibility anyway.


To evolution it was also an advantage for us to put reproduction before our happiness.

i don't think evolution has its own agenda.


Screw evolution! It is time that we took care of ourselves - it is time that we overcome the horribly slow process of evolution and that we use biotechnology to design ourselves.

We will use transhumanist biotechnology not only to abolish suffering, but also to abolish aging, disability, and disease. Furthermore, we will have ocular implants to greatly increase our range of vision and to let us browse the web directly at any time, as well as greatly enhanced limbs, armor, and physical strength.

gross.

CommunityBeliever
20th September 2010, 01:59
i don't see anything particularly wrong with being depressed sometimes, certainly not enough to warrant manufacturing happiness.

You can suffer then, feel free to do so; however, I do not think you can speak for everybody else.

The biotechnology and science behind this show that it is now just an ethical and social issue: do we want to eliminate suffering from most lives?

bcbm
20th September 2010, 02:01
You can suffer then, feel free to do so; however, I do not think you can speak for everybody else.

The biotechnology and science behind this show that it is now just an ethical and social issue: do we want to eliminate suffering from most lives?

suffering will be eliminated by destroying the social conditions that create it, not by gene therapy and pills.

CommunityBeliever
20th September 2010, 02:06
suffering will be eliminated by destroying the social conditions that create it, not by gene therapy and pills.

No, it won't. Please read the whole article, people, and study the science behind it: this project is based on biotechnology and solid scientific theories.

Twin studies confirm there is a [partially] heritable set-point of well-being - or ill-being - around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals. [It's possible to lower our hedonic set-point by inflicting prolonged uncontrolled stress; but even this re-set is not as easy as it sounds: suicide-rates typically go down in wartime; and six months after a quadriplegia-inducing accident, studies suggest that we are typically neither more nor less unhappy than we were before the catastrophic event.] Unfortunately, attempts to build an ideal society can't overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one's garden. Even if everything that traditional futurists have asked for is delivered - eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc - there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors - or a New Guinea tribesman today - in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress e.g. suicide rates, bear it out. Unenhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations - sadness, anxiety, jealousy, existential angst.
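If it helps to see the set-point idea concretely: hedonic adaptation is often described as mean reversion, where life events knock well-being up or down but it drifts back toward a genetically anchored baseline. Here is a toy simulation of that claim - the numbers, function name, and parameters are all illustrative assumptions of mine, not anything from the article:

```python
# Toy model (not from the article): well-being as a mean-reverting process
# around a fixed hedonic set-point. All parameter values are made up.
import random

def simulate_wellbeing(set_point=6.0, reversion=0.2, noise=0.3,
                       shocks=None, steps=100, seed=42):
    """Simulate a well-being trajectory that reverts to `set_point`.

    `shocks` maps time-step -> one-off perturbation (a windfall, an
    accident). Whatever happens, adaptation pulls the trajectory back
    toward the baseline.
    """
    rng = random.Random(seed)
    shocks = shocks or {}
    w = set_point
    trajectory = []
    for t in range(steps):
        w += shocks.get(t, 0.0)            # external life event
        w += reversion * (set_point - w)   # adaptation pulls back to baseline
        w += rng.gauss(0.0, noise)         # ordinary day-to-day variation
        trajectory.append(w)
    return trajectory

# A big positive shock at t=20 and a big negative one at t=60:
traj = simulate_wellbeing(shocks={20: +3.0, 60: -3.0})
# Both shocks fade; the late-trajectory average sits near the set-point.
print(round(sum(traj[80:]) / 20, 1))
```

On this picture, improving external conditions changes the shocks, not the set-point - which is the article's point that environmental reform alone can't lift well-being above the biological ceiling.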

bcbm
20th September 2010, 02:09
i read the article, its scifi horseshit.

this is an invasion
20th September 2010, 02:09
I understand communism to be the creation of a truly human community.


Can someone explain to me how this Darth Vader shit fits into that?

CommunityBeliever
20th September 2010, 02:14
i read the article, its scifi horseshit.

Thanks for presenting this scientific evidence, I will certainly consider it next time I make a decision.

bcbm
20th September 2010, 02:35
anytime champ

Quail
20th September 2010, 02:52
Which is the precise reason that most people will embrace the wirehead technology and make them happy all of the time.

You mean, the precise reason why people will try it, get hooked and be unable to stop using it? Why don't we just say fuck biotechnology and dope everyone up on heroin? I hear that stuff makes you feel pretty good.

CommunityBeliever
20th September 2010, 02:55
Why don't we just say fuck biotechnology and dope everyone up on heroin?

Heroin has many negative side effects so people tend to refrain from using it.

bcbm
20th September 2010, 03:11
so this idea is heroin without the same negative side effects, basically.

Kuppo Shakur
20th September 2010, 03:36
How about we just genocide ourselves? Put an end to even the possibility of suffering.

CommunityBeliever
20th September 2010, 04:34
How about we just genocide ourselves? Put an end to even the possibility of suffering.

That would also get rid of capitalism! You truly are a genius, why don't we put you in charge of the world :p

Amphictyonis
20th September 2010, 22:40
Without suffering how will we recognize joy?

Kuppo Shakur
20th September 2010, 23:01
That would also get rid of capitalism! You truly are a genius, why don't we put you in charge of the world :p
Right back atcha, bro.;)

Without suffering how will we recognize joy?
We won't have to! The God AI will do that for us!

Amphictyonis
20th September 2010, 23:07
i read the article, its scifi horseshit.

http://www.youtube.com/watch?v=nwuy2t6025k&feature=related