
Anarchism, Transhumanism, and the Singularity

Discussion in 'General political debates' started by NGNM85, Sep 9, 2009.

  1. NGNM85 (Experienced Member, joined Sep 8, 2009)

    I thought this might be a perfect time to discuss this, as the Singularity "movement" has been gaining ground with the opening of the Singularity Institute and a coming international Singularity Summit in New York. For those who are unaware, "the Singularity" is a projected transformation of society and humankind, named after the cosmological event where gravity becomes infinite. The Singularity is not a fixed concept, but has multiple interpretations. The term was coined by the author, math professor, and computer scientist Vernor Vinge. All Singularity theories rest on the idea of a projected point in the future, perhaps the near future, where scientific and technological development enter a kind of feedback loop and increase rapidly and exponentially.
    Vinge's conception rested on the idea that the precursor to the Singularity would be the creation of the first artificial intelligence smarter than humans, which would then theoretically be able to design still "smarter" machines, and so on. There is another significant school of thought which maintains that this event could take place in human beings, through artificial augmentation via bioengineering, nanotechnology, etc., which could theoretically produce a human intelligence great enough to spark the aforementioned feedback loop and essentially evolve into something else altogether. This specific process is known as "Transhumanism".
    Scientist and inventor Raymond Kurzweil has described the technological progress of humankind through what he calls "The Law of Accelerating Returns": the basic truism that scientific and technological progress enables further science and technology, and so on. This is exemplified in Moore's Law, the observation that computing power essentially doubles every two years.
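    The arithmetic behind that doubling claim is easy to sketch. Here's a back-of-the-envelope illustration (not Kurzweil's or Moore's actual model; the two-year doubling period is just the commonly cited figure):

```python
# Back-of-the-envelope sketch of Moore's Law-style growth:
# if capacity doubles every `doubling_period` years, then after
# `years` years it has grown by a factor of 2 ** (years / doubling_period).
def moores_law_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Fifteen doublings over thirty years compound to about 32,768x.
print(round(moores_law_factor(30)))  # -> 32768
```

    The point of the exponential form is that each step builds on the last, which is exactly the "accelerating returns" feedback described above.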
    The Singularity has its detractors, from neo-primitivists, to people who express grave concern about the negative implications of such powerful technology, to those who simply dismiss it as "the rapture for nerds." For the uninitiated, here are a few links:
    Wikipedia article explaining the Singularity
    http://en.wikipedia.org/wiki/Technological_singularity
    A Singularity FAQ
    http://jwbats.blogspot.com/2005/07/sing ... mmies.html
    The official site of the World Transhumanist Association, now known as Humanity +
    http://www.humanityplus.org/learn/philo ... eclaration

    Obviously we live in a constantly changing world. While there are absolutes, and fundamentals are important, I’ve increasingly felt that Anarchist thought is somewhat stuck at the turn of the century. How do the ideas of classical Anarchism apply to these sweeping social and technological changes? How should we revise our ideas and tactics to respond to these events? What are the liberatory aspects of these new technologies; or rather, how can they, or should they, be used to promote equity and liberty and to increase the quality of life?
     

  2. Spider (Experienced Member, joined Sep 3, 2009)

    Fuck.

    This disturbs me somewhat.

    One thing about science that freaks me out is that scientists (vast generalisation warning) tend to be extremely caught up in their research, in an unwavering quest for intellectual glory and government funding, and completely oblivious to the potential social consequences of realising their research objectives.

    Case in point: the LHC in Geneva. It may possibly produce a black hole which destroys mankind, but they're willing to use it anyway for the chance that they may prove or disprove some theories about the universe and get some Nobel prizes.

    Given that these scientists are quite often social outcasts who put all of their time and energy into furthering scientific and technological knowledge, is it not plausible that they don't put any weight on the lives of the people who live in the universe they are trying to understand?

    Who's to say robots will be any different? If they are emotionlessly seeking to further their knowledge and intelligence, why would they let inferior humans slow them down in their quest for omnipotence?

    "If we program them friendly it will be ok"? My arse; once the singularity occurs they'll be programming themselves however they like.

    I think a major point to do with robotics, though, is how enormous a blow it will be to any idea of social equity and justice. When you control the company that is creating robots capable of doing any task a human can do, better, for no pay, you control all the money in the world. And everybody is milked of every last cent.

    This idea of AI is just one nightmare after another, and as humanitarians and anticapitalists we should be doing anything in our power to stop it from eventuating.
     
  3. ghost in the void (Experienced Member, joined Aug 8, 2009)

    i don't really live in fear of computers gaining sentience. i do however think people should realise there's a very strong possibility of this happening.

    humans have made a complete mess of our time on this planet. assuming "the singularity" means AI would be a cold, hard, calculating intelligence, then the biggest worry is it would seek to remove us because we are simply inefficient at living on this planet sustainably. at present, at least.

    i'm not sure what drives me to often mention fiction, or sci-fi for that matter, but once again i'll indulge myself. this issue has been addressed in the prototypical cyberpunk book, "neuromancer", by william gibson. in it (spoiler alert!) an AI basically dupes a bunch of humans into making it omnipotent, getting them to hack the technological failsafes set up to prevent this from happening. by the end of the book, or perhaps in one of its sequels, it is revealed this demigodlike machine has been communicating with others like it, the combined technological data of other civilisations elsewhere in the universe. "ghosts in the machine".

    if "the singularity" did manifest i think something like this is much more likely than otherwise. it wouldn't be "us vs the machines" like apocalyptic movies such as the terminator series or the matrix, it would be us and THE machine. it would be one thing essentially, unless it chose not to be, which would be inefficient as far as i can see. we would matter as much to it as a plague of insects (the largest animal biomass on earth) matters to us. plagues are dangerous, but controllable. so are we.

    i don't think a powerful AI like this would get rid of us, it would be pointless in the grand scheme of things. if i was it i'd keep us around just to have something to laugh at. hopefully "the singularity" gets a sense of humour. judging by the content of the internet, it would nearly have to, because we are so damn silly as a species, and we continually prove this again and again.
     
  4. NGNM85 (Experienced Member, joined Sep 8, 2009)

    There is some truth to this, but I think the greater issue is the monolithic institutions (governments, corporations, etc.) that seek to employ technology to meet their objectives, which may be totally insane. A perfect example is the atomic bomb: Einstein wrote the president a letter pleading with him not to use the bomb, and Oppenheimer became a tireless activist against nuclear proliferation. While there should always be ethical analysis, and the consequences should be weighed heavily, honestly I think the world would be much better if it were run by the scientists rather than the politicians and executives.

    That’s actually bogus. Cosmic rays colliding with our atmosphere, or with white dwarfs or whatever, create much more powerful reactions, and they aren't spawning massive black holes all over the place. If one were created at all it would be microscopic and would fizzle out within seconds, and even that is extremely unlikely. So, no worries! However, I have wondered about some Dr. Strangelove-type scenario like this; that the world may not end with a bang or a whimper, but an "Oops!”

    Well, it certainly has a lot of complexities and implications. This is why some of the leaders of the computing and robotics industries and related disciplines are already talking about the possible consequences, and how to proceed responsibly. Here's an article on just that subject that caught my attention in the NY Times: http://www.nytimes.com/2009/07/26/scien ... .html?_r=1
    Here's another one from a little further back that also mentions the Singularity:
    http://www.nytimes.com/2009/05/24/we...24markoff.html
    Basically, as I see it, the rule of thumb should be “proceed, with caution.” We really don’t put nearly enough energy into threats to mankind; you would think they would draw more attention. These are called “existential risks”, and AI could potentially be one. One of the pioneers in that area is Oxford professor Nick Bostrom, who is absolutely brilliant; I’ll put up some of his stuff later. Another thing to consider is that an AI would be philosophically and morally equivalent to a human being; restricting its access to data, or limiting its functions, might essentially be slavery.

    Actually, I have a very different perspective. First we have to broaden the scope to advances in other technologies which will parallel this one. There is exciting progress in using nanotechnology to better harness and store solar energy. The problem now is that our solar panels are sort of clumsy devices, not to mention inadequately deployed. However, if we could capture all the solar energy that falls on Earth every day, it would provide roughly 10,000x the power we presently use. That’s four zeros, I know; it’s shocking. Now, barring some existential crisis, we will eventually possess near-infinite clean power, and combined with advances in robotics this might actually end capitalism, as well as most monolithic structures. To paraphrase Ozymandias from Watchmen, all major power systems are built on scarcity; once resources approach infinity this arrangement becomes untenable. Money just might become obsolete. Already more and more of the world's wealth is becoming digitized; hard currency is wasteful and inefficient. The next step, I think, would be a post-capitalist society, which I can only imagine.
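    That 10,000x figure does hold up as an order-of-magnitude estimate. Here's a hedged back-of-the-envelope check (the two input figures are rough, commonly cited 2009-era estimates, not precise measurements):

```python
# Order-of-magnitude check of the "10,000x" solar claim.
# Both figures below are rough published estimates, not exact values.
SOLAR_POWER_INTERCEPTED_BY_EARTH_TW = 174_000  # roughly 174 petawatts of sunlight
HUMAN_PRIMARY_POWER_USE_TW = 16                # roughly 16 terawatts (circa 2009)

ratio = SOLAR_POWER_INTERCEPTED_BY_EARTH_TW / HUMAN_PRIMARY_POWER_USE_TW
print(f"Sunlight delivers roughly {ratio:,.0f}x current human power use")
```

    With these rough inputs the ratio comes out around 10,875, i.e. the claimed four zeros.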

    I disagree, but this is exactly the kind of thinking I was hoping to promote; taking stock of these emergent technologies, and applying secular humanist principles.
     
  5. NGNM85 (Experienced Member, joined Sep 8, 2009)

    I agree. None of the opposition's arguments make much sense. Mostly they come from the "quantum consciousness" camp, which is against the mainstream of neuroscience. To paraphrase a scientist whose name I forget: it can be done because it has been done; we are thinking machines. If you exclude superstitious mumbo-jumbo, there’s no reason the mechanism of consciousness could not be replicated. If we can build artificial hearts or dialysis machines, it stands to reason an artificial brain is just a few steps further along that trajectory.

    I'm not sure it would care, as long as it didn't disturb its power source.

    Fiction, science fiction especially, is a novel vehicle to objectively explore the human condition.

    A classic. I haven't finished it yet, but it's cool.

    I think an AI would be very different; the gap might be unbridgeable. Any morality might have to be programmed into it, as ours is essentially an evolved behavioral mechanism which it would have no inherent need for. Love, too, is an evolved trait to facilitate reproduction, and such an entity very likely would not care to reproduce. Food for thought.
     
  6. Spider (Experienced Member, joined Sep 3, 2009)

    Full props for referencing Watchmen. Some of the greatest ideas and observations of the 20th century can be found in comics.

    So the idea of ending capitalism by removing its hold on resources raises the question: would a worldwide anarchic state be instantly plausible if resources became infinite and power became obsolete? I think this might possibly be the only feasible way that the greater population might come around to the idea.

    That said, I'm still highly suss of the robots not just going I, Robot on our arses. Who's to say they don't one day realize that the atmosphere is slowing down their processing power and decide to remove it, and with it all life? Or some similar apocalyptic realization?
     
  7. NGNM85 (Experienced Member, joined Sep 8, 2009)

    Some more Transhumanism/Singularity stuff. As promised, here's some material from Nick Bostrom, Oxford professor and one of the founders of the World Transhumanist Association.

    "Transhumanism: The World's Most Dangerous Idea?", from Foreign Policy
    http://www.nickbostrom.com/papers/dangerous.html

    A recent interview in Time magazine.
    http://www.time.com/time/health/article ... 27,00.html

    A fascinating lecture on Transhumanism and Existential Risks.
    http://www.youtube.com/watch?v=Yd9cf_vLviI

    I also wanted to post a great presentation at UPenn by faculty member Dr. Susan Schneider on Transhumanism and philosophy.
    http://www.youtube.com/watch?v=j7fN0xW8egc
     
  8. NGNM85 (Experienced Member, joined Sep 8, 2009)

    I found this great introduction to Anarcho-Transhumanism on a website which unfortunately looks like it was abandoned. (Here's the link, because I don't want to be accused of plagiarism: http://www.anarcho-transhumanism.com/)
    Anyhow, I just thought it was a really clear, concise introduction. Here goes...

    "ANARCHO-TRANSHUMANISM:
    The Ultimate Synthesis

    Anarchism: The political theory that aims to create a society free of all forms of authority, particularly those involving domination and exploitation.

    Transhumanism: The cultural movement that affirms the desirability of fundamentally altering the human condition through applied science and technology.

    Anarcho-Transhumanism stands for:

    Political Freedom: Against the tyranny of government.
    Economic Freedom: Against the tyranny of capitalism.
    Biological Freedom: Against the tyranny of genes.

    Anarcho-Transhumanism is not:
    Libertarian: It does not believe in free-market fantasies.
    Extropian: It does not believe in optimistic futurism."
     
  9. Carcass (Experienced Member, joined Oct 12, 2009)

    You really think we're going to bioengineer a race of geniuses when we can't even manage a functional public school system? You're dreamin'. When I was 18 this would have really appealed to me.

    If bioengineering ever becomes that advanced, the first step is going to be turning all us workers into even more passive drones than TV and drugs have already made us. If robotics ever becomes that advanced, then there will be no need for the working class and we'll be wiped out. The interests of the people who control these technologies are not the same as the interests of the working class. More likely, powerful technologies that we do not control or develop will be used against us as they have been in the past. Personally, I'm not interested in some specialist "fixing" what he thinks is wrong with my brain; his vision of a techno-utopia probably differs drastically from mine. The chief problems I have with myself have to do with the ways in which I'm alienated from the people around me, the land base that sustains me and the labor I perform. These are not genetic defects, they are the results of conditioning and being on the losing side of the class war. I do not need nanobots rearranging my brain to fix this situation.
     
  10. NGNM85 (Experienced Member, joined Sep 8, 2009)

    That is just a series of gross oversimplifications. First of all, you have to draw a line between the problems of humanity and the deficiencies of the monolithic institutions in which we live.

    Some of the most brilliant and respected scientists and philosophers are involved with or connected to the Singularity/Transhumanism and related fields: Raymond Kurzweil, Oxford professor Nick Bostrom, even Stephen Hawking.

    You’re leaving out artificial intelligence, nanotechnology, quantum computing, etc.

    This is also the most dire, fatalistic idea. The Transhumanist movement is not extropian, nor am I. Such powerful technologies must be used with the utmost care. Any action should be judged by the number of people it will affect, and how profoundly; so major changes should be undertaken with great care, that goes without saying. Transhumanists themselves are some of the loudest voices calling for caution and careful consideration.

    Also, if you say that such profound technologies could be so constructed as to create such devastating effects, you must also concede that they could be equally miraculous agents for liberating mankind and improving the quality of life.


    Actually, as I was saying, these technologies could essentially eliminate class structure, in total. When resources approach infinity these institutions, like government and corporations, cease to be viable.

    That’s simply wrong. Look at the internet, or cell phones, or even indoor plumbing. Some of the simplest technological innovations have had the biggest impact; plumbing alone has probably saved millions or billions of lives. Again, technology is only inaccessible and unaffordable when it doesn’t work very well. Once it is perfected, it becomes omnipresent.

    One of the founding principles of Transhumanism is individual liberty. This is part of why it is compatible with Anarchism; they share common threads, namely secular humanism and (small "l") libertarianism.

    If people don’t want enhancements once they become available (which would be very gradual), they can decline them. We live alongside the Amish and modern Luddites. Actually, ironically, the Amish support bioengineering, because the insular nature of their communities has led to inbreeding, which causes a lot of rare genetic diseases.

    Technology could replace, or augment, whatever labor you presently do. Like the internet and cell phones, it could make you more connected to a larger group of people. Technology can help save the environment in a number of ways: by developing near-infinite clean power, by replacing dirty fuels, by genetically preserving endangered species, and by getting more of us off this planet. Improving your intelligence and cognition might not make you happier or improve your life, though I find that hard to believe, but that is your choice. It could also profoundly help everyone around you. As Bostrom has pointed out, if we were able to improve the cognitive functioning of all scientists by 1% (a very achievable goal), it would be like adding hundreds of thousands of new scientists. The cumulative effect would be awesome.
    As I’ve said, the human race will evolve, or become extinct. I’d rather the latter didn’t happen. If we want to survive, and especially if we want to be best equipped to handle emergent crises and manage them, the key is discovery. Ignorance has a pretty piss-poor track record; it’s not a good problem solver. Unlike the dinosaurs, we can make the choice to survive or not.
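    Bostrom's 1% point is simple multiplication. Here's a hypothetical sketch (the worldwide headcount below is an illustrative round number, not a real census figure):

```python
# Illustrative version of Bostrom's argument: a small uniform boost to a
# large population of researchers equals many extra researcher-equivalents.
scientists_worldwide = 10_000_000  # hypothetical round figure, for illustration
productivity_boost = 0.01          # a 1% improvement in cognitive output

extra_researcher_equivalents = scientists_worldwide * productivity_boost
print(int(extra_researcher_equivalents))  # -> 100000
```

    The design of the argument is that the gain scales with the whole population at once, which is why even a tiny per-person improvement compounds into something huge.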
     
  11. GFSM (Experienced Member, joined Oct 25, 2009)

    i found this rather interesting. it's Scottish author Iain M. Banks talking about his series of "Culture" novels:

    A FEW NOTES ON THE CULTURE

    by Iain M Banks
    #
    Firstly, and most importantly: the Culture doesn't really exist.
    It's only a story. It only exists in my mind and the minds of the
    people who've read about it.
    #
    That having been made clear:
    #
    The Culture, in its history and its on-going form, is an expression
    of the idea that the nature of space itself determines the type of
    civilisations which will thrive there.
    The thought processes of a tribe, a clan, a country or
    a nation-state are essentially two-dimensional, and the nature
    of their power depends on the same flatness. Territory is all-important;
    resources, living-space, lines of communication; all are determined by
    the nature of the plane (that the plane is in fact a
    sphere is irrelevant here); that surface, and the fact the species
    concerned are bound to it during their evolution, determines the
    mind-set of a ground-living species. The mind-set of an aquatic
    or avian species is, of course, rather different.
    Essentially, the contention is that our currently dominant
    power systems cannot long survive in space; beyond a certain
    technological level a degree of anarchy is arguably inevitable
    and anyway preferable.
    To survive in space, ships/habitats must be self-sufficient,
    or very nearly so; the hold of the state (or the corporation) over
    them therefore becomes tenuous if the desires of the inhabitants
    conflict significantly with the requirements of the controlling body.
    On a planet, enclaves can be surrounded, besieged, attacked;
    the superior forces of a state or corporation - hereafter referred
    to as hegemonies - will tend to prevail. In space, a break-away
    movement will be far more difficult to control, especially if
    significant parts of it are based on ships or mobile habitats.
    The hostile nature of the vacuum and the technological complexity
    of life support mechanisms will make such systems vulnerable to
    outright attack, but that, of course, would risk the total
    destruction of the ship/habitat, so denying its future economic
    contribution to whatever entity was attempting to control it.
    Outright destruction of rebellious ships or habitats
    - pour encourager les autres - of course remains an option for the
    controlling power, but all the usual rules of uprising
    realpolitik still apply, especially that concerning the peculiar
    dialectic of dissent which - simply stated - dictates that in all
    but the most dedicatedly repressive hegemonies, if in a sizable
    population there are one hundred rebels, all of whom are then
    rounded up and killed, the number of rebels present at the end
    of the day is not zero, and not even one hundred, but
    two hundred or three hundred or more; an equation based on
    human nature which seems often to baffle the military
    and political mind. Rebellion, then (once space-going
    and space-living become commonplace), becomes easier than
    it might be on the surface of a planet.
    Even so, this is certainly the most vulnerable point in the
    time-line of the Culture's existence, the point at which it is
    easiest to argue for things turning out quite differently, as the
    extent and sophistication of the hegemony's control mechanisms -
    and its ability and will to repress - battles against the
    ingenuity, skill, solidarity and bravery of the rebellious
    ships and habitats, and indeed the assumption here is that this
    point has been reached before and the hegemony has won...
    but it is also assumed that - for the reasons given above -
    that point is bound to come round again, and while the forces of
    repression need to win every time, the progressive elements need
    only triumph once.
    Concomitant with this is the argument that the nature of life
    in space - that vulnerability, as mentioned above - would mean that
    while ships and habitats might more easily become independent from
    each other and from their legally progenitative hegemonies, their crew -
    or inhabitants - would always be aware of their reliance on each other,
    and on the technology which allowed them to live in space. The theory
    here is that the property and social relations of long-term
    space-dwelling (especially over generations) would be of a
    fundamentally different type compared to the norm on a planet;
    the mutuality of dependence involved in an environment which
    is inherently hostile would necessitate an internal social
    coherence which would contrast with the external casualness
    typifying the relations between such ships/habitats.
    Succinctly; socialism within, anarchy without.
    This broad result is - in the long run - independent of the
    initial social and economic conditions which give rise to it.
    Let me state here a personal conviction that appears,
    right now, to be profoundly unfashionable; which is that a
    planned economy can be more productive - and more morally
    desirable - than one left to market forces.
    The market is a good example of evolution in action;
    the try-everything-and-see-what-works approach. This might
    provide a perfectly morally satisfactory resource-management
    system so long as there was absolutely no question of any
    sentient creature ever being treated purely as one of those
    resources. The market, for all its (profoundly inelegant)
    complexities, remains a crude and essentially blind system,
    and is - without the sort of drastic amendments liable to
    cripple the economic efficacy which is its greatest claimed
    asset - intrinsically incapable of distinguishing between simple
    non-use of matter resulting from processal superfluity and the
    acute, prolonged and wide-spread suffering of conscious beings.
    It is, arguably, in the elevation of this profoundly
    mechanistic (and in that sense perversely innocent) system to a
    position above all other moral, philosophical and political
    values and considerations that humankind displays most convincingly
    both its present intellectual immaturity and
    - through grossly pursued selfishness rather than the applied hatred
    of others - a kind of synthetic evil.
    Intelligence, which is capable of looking farther ahead
    than the next aggressive mutation, can set up long-term aims
    and work towards them; the same amount of raw invention that
    bursts in all directions from the market can be - to some
    degree - channelled and directed, so that while the market
    merely shines (and the feudal gutters), the planned lases,
    reaching out coherently and efficiently towards agreed-on
    goals. What is vital for such a scheme, however, and what
    was always missing in the planned economies of our world's
    experience, is the continual, intimate and decisive
    participation of the mass of the citizenry in determining
    these goals, and designing as well as implementing the
    plans which should lead towards them.
    Of course, there is a place for serendipity and chance
    in any sensibly envisaged plan, and the degree to which this
    would affect the higher functions of a democratically
    designed economy would be one of the most important
    parameters to be set... but just as the information we have
    stored in our libraries and institutions has undeniably outgrown
    (if not outweighed) that resident in our genes, and just as we
    may, within a century of the invention of electronics, duplicate
    - through machine sentience - a process which evolution took
    billions of years to achieve, so we shall one day abandon the
    grossly targeted vagaries of the market for the precision
    creation of the planned economy.
    The Culture, of course, has gone beyond even that, to
    an economy so much a part of society it is hardly worthy of a
    separate definition, and which is limited only by imagination,
    philosophy (and manners), and the idea of minimally wasteful
    elegance; a kind of galactic ecological awareness allied to a
    desire to create beauty and goodness.
    Whatever; in the end practice (as ever) will outshine theory.
     
  12. ASA (Experienced Member, joined Nov 2, 2009)

    Ideas are free unless they are to manipulate to control and i am watching, either which way I'm interested in change but not fascinated so it's a personal thing for the time being not a bias per se, people are upskilling i've noticed which is noice but i also believe that they are unfocussed and playing the fiddle, cry me a river PUNK! and yes i have empathy unlike some seemingly

    i keep saying you have a simple mandate with an explanation of words as words can be tricky beasts, no govt, organisation, no bias, equality, diversity....

    people get stuck on even the basics so judge others, don't eat that, don't say that um excuse me explain and i will endeavour to work with you or its not free, oh you're a communist fine we will have to agree to disagree and aim for the middle literally without being wishy washy, peepz appear to do what they want as they have been cajoled to do since they went to school, it takes nouse that anybody can have of course to realise and do something with it, FREEeeee

    Govts have been shown not to work, it's like the Nazis won the war and wat camp, Serious Change in a New 'Millennium!' rant
     
  13. ASA (Experienced Member, joined Nov 2, 2009)

    and scientists act with impunity, oh gosh they didn't know that they were building a bomb, how sad. if they want to be god 'they will have to wear the cross,' as the only god is 'freedom', anarchy.
     
  14. BlinkoChrist (Experienced Member, joined Nov 1, 2009)

    I don't want to be "genetically altered". That's just weird, I'm sorry.
     
  15. ASA (Experienced Member, joined Nov 2, 2009)

    what's normal?
     
  16. Carcass (Experienced Member, joined Oct 12, 2009)

    So what do we think, did NGNM85 leave because he realized that no one here gives a fuck about/believes in the singularity? Or did he finally figure out that being condescending towards and dismissive of absolutely every worldview save your own makes for poor conversation?

    [image: a comic strip]

    (From http://www.picturesforsadchildren.com/)
     
  17. ASA (Experienced Member, joined Nov 2, 2009)

    condescending, and this is a place for ideas about... not to dismiss ours at every turn.
     
  18. NGNM85 (Experienced Member, joined Sep 8, 2009)

    No such luck...

    The "singularity" is just a buzzword; I didn't choose it. Just like "left-wing", or "radical", or whatever, it's a broad-based term to facilitate communication. We can debate its merits; I think it's as good as anything else.

    As for the popular disinterest, I'd have to say that's evident, and unfortunate. In hindsight, maybe if I'd chosen a better title like "Anarchism and Technology" or "Libertarian Socialism and Scientific Progress" it might've generated a tad more interest. It's really too bad. The world is changing and we need to be aware of these changes. Like it or not, this is going to change all our lives; you can bury your head in the sand, or you can deal with it.

    We also need to look forward. Sure, there are some universal principles that still hold true today, but Kropotkin and Bakunin couldn't have even conceived of the world we presently inhabit. I think it's accepted that one of the unique features of Anarchism as a philosophy is that it is not a static structure, rather it is dynamic, changing to meet the needs of the time.

    Also, genetic engineering, artificial intelligence, cloning, and transhumanism involve deep and complex ethical and philosophical issues. I think there is room for discussion as to how they relate to Anarchism, or, more broadly, the liberatory aspects of technological progress.

    I'm really surprised to hear you say that; it's too bad, you articulated yourself pretty well before. As for condescension, "those who live in glass houses..." I don't think I've been excessively harsh on anybody. If I've reacted strongly, it's to something that is just total lunacy, like whoever it was who was praising the Ft. Hood murders, or someone advocating primitivism, which is absolutely insane. Really, if you have any legitimate issues I'm more than willing to hear them; I think I've shown an incredible amount of patience, and I try to avoid ad hominem bullshit and personal attacks. So, if you or anybody else has an interesting interpretation, that's cool, but if "fuck you" is the pinnacle of intellectual debate there's really no point; that's not interesting or intelligent.

    Since you posted the cartoon it's implied you endorse the content; I'd like to unpack some of that.

    "In the next thirty years Computers will change humanity in ways incomprehensible to us now."

    Absolutely. This is a very basic truism. Go back thirty years to 1979: computing HAS radically changed our world. Barring an existential disaster we can essentially count on at least equal change in the next thirty years. For very basic logical reasons, the change between now and 2039 will be considerably greater than that between 1979 and today.
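    As a back-of-the-envelope sketch of that arithmetic (my own illustration, assuming a fixed two-year doubling period in the spirit of Moore's Law; real progress curves are messier), the two thirty-year windows grow by the same *factor*, but the later one starts from a vastly higher baseline:

    ```python
    # Sketch of the "accelerating returns" arithmetic, assuming a fixed
    # two-year doubling period (a Moore's-Law-style simplification).

    def growth_factor(years, doubling_period=2):
        """Multiplicative increase in capability over `years`."""
        return 2 ** (years / doubling_period)

    # 1979->2009 and 2009->2039 share the same multiplicative factor...
    factor = growth_factor(30)  # 2**15 = 32768

    # ...but the absolute gain in the later window dwarfs the earlier one,
    # because it starts from a baseline already `factor` times higher.
    gain_1979_2009 = factor - 1            # baseline normalized to 1
    gain_2009_2039 = factor**2 - factor    # baseline is already `factor`

    print(gain_2009_2039 / gain_1979_2009)  # ratio equals `factor`: 32768x
    ```

    So even at a constant doubling rate, the absolute change in the second window is tens of thousands of times larger, which is the point being made above.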

    "A third of the world is without electricity"

    I haven't done any major research but I'll concede that's true. However, that statement has little value in this context. The implication is really bogus.

    "Flying car bullshit"

    Well, the flying car thing goes way back, so blame some science fiction writer from half a century ago. Actually, the main reason we don't have flying cars has to do with superconductors. Now, the ideas involved here are amazing, bordering on fantastical, but that's the world we live in. We can grow an ear on the back of a mouse, we can send a probe to Mars, and pretty soon human cloning will be possible, although ethically dubious to say the least. Five hundred years ago this would have been thought of as magic; today it's real, it's science.

    I don't know what "spiritual significance" means. Personally I dislike the word, as I've said before; I think it's really a nonsense word that gets bounced around primarily by fuzzy-minded people. Virtually all of the philosophers and scientists involved with what we've agreed to call "the singularity" or "transhumanism" are complete atheists. The key distinction is that religion is founded on a different kind of "faith." Religious "faith" essentially translates into complete certitude in fantastical myths that have no basis in reality. I call that "gullibility" or "stupidity," but they want to call it "faith." Okay. Genetic engineering, quantum computing, nanotechnology, etc., are all very real things that actually exist today. Are there some people who promote fantastical ideas? Very likely. However, nothing that I've said or linked to could really be classified that way. This is really baseless. I mean, Kurzweil sometimes gets a little carried away, but he has an amazing track record. Nobody bats 1.000.

    Also, comparing technological progress to the Rapture is really disingenuous. Emergent technologies could radically improve our lives, or they could do really substantial harm. Even Kurzweil admits that; Nick Bostrom, whom I've quoted repeatedly, has written a lot about the destructive potential of emergent technologies, Bill Joy as well, although I think he tends toward being overly pessimistic. There is no guarantee. However, if you are so ready to acknowledge the destructive potential of technology, you obligate yourself to admit that it has equally profound potential benefits. At the end of the day I say "drink deep," because one of the unique properties of science is that while it can be used to create problems, it also helps solve them.

    Is the future of the human race just a meaningless distraction? I don't think so. I think it's immensely important. Again, I'll paraphrase Bostrom: there are four big possibilities for the human race, there's no other way to see it:
    1. Extinction
    2. Plateau
    3. Recurring Collapse
    4. Posthumanity
    Therefore, it's a very legitimate and serious issue.
     
  19. sociopop82

    sociopop82 Experienced Member Experienced member


    95

    0

    4

    Sep 3, 2009
     
    In comparison to giga-humans...
    Look who's expendable!
     
  20. NGNM85

    NGNM85 Experienced Member Experienced member Forum Member


    459

    0

    0

    Sep 8, 2009
     
    I thought this was a really great article outlining the ethics of Transhumanism, by Eliezer Yudkowsky, one of the leaders in the field of Artificial Intelligence.

    Transhumanism as Simplified Humanism
    By Eliezer Yudkowsky
    Frank Sulloway once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”

    Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?

    Oh, and by the way: This is not a trick question.

    I answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t always the best choice, but sometimes it is.

    I won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.

    If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?

    The important thing to remember, which I think all too many people forget, is that it is not a trick question.

    Transhumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what exact age does the term in the utility function go from positive to negative? Why?

    As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.

    You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.

    Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?

    Well, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.

    But – you ask – where does it end? It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?

    Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.

    Ultimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.

    So that is “transhumanism” – loving life without special exceptions and without upper bound.

    Can transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.

    Then why have a complicated special name like “transhumanism”? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.

    But a moral philosophy should not have special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.

    There is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a bad thing?

    Could the moral question really be just that simple?

    Yes.
     