A Response to Steven Pinker on AI

  • Published on Mar 31, 2019
  • Steven Pinker wrote an article on AI for Popular Science Magazine, which I have some issues with.
    The article: www.popsci.com/robot-uprising-enlightenment-now
    Related:
    "The Orthogonality Thesis, Intelligence, and Stupidity" (usclip.net/video/hEUO6pjwFOo/video.html)
    "AI? Just Sandbox it... - Computerphile" (usclip.net/video/i8r_yShOixM/video.html)
    "Experts' Predictions about the Future of AI" (usclip.net/video/HOJ1NVtlnyQ/video.html)
    "Why Would AI Want to do Bad Things? Instrumental Convergence" (usclip.net/video/ZeecOKBus3Q/video.html)
    With thanks to my excellent Patreon supporters:
    www.patreon.com/robertskmiles
    Jason Hise
    Jordan Medina
    Scott Worley
    JJ Hepboin
    Pedro A Ortega
    Said Polat
    Chris Canal
    Nicholas Kees Dupuis
    James
    Richárd Nagyfi
    Phil Moyer
    Shevis Johnson
    Alec Johnson
    Lupuleasa Ionuț
    Clemens Arbesser
    Bryce Daifuku
    Allen Faure
    Simon Strandgaard
    Jonatan R
    Michael Greve
    The Guru Of Vision
    Julius Brash
    Tom O'Connor
    Erik de Bruijn
    Robin Green
    Laura Olds
    Jon Halliday
    Paul Hobbs
    Jeroen De Dauw
    Tim Neilson
    Eric Scammell
    Igor Keller
    Ben Glanton
    Robert Sokolowski
    anul kumar sinha
    Jérôme Frossard
    Sean Gibat
    Volotat
    andrew Russell
    Cooper Lawton
    Gladamas
    Sylvain Chevalier
    DGJono
    robertvanduursen
    Dmitri Afanasjev
    Brian Sandberg
    Marcel Ward
    Andrew Weir
    Ben Archer
    Scott McCarthy
    Kabs
    Tendayi Mawushe
    Jannik Olbrich
    Anne Kohlbrenner
    Jussi Männistö
    Mr Fantastic
    Wr4thon
    Archy de Berker
    Marc Pauly
    Joshua Pratt
    Andy Kobre
    Brian Gillespie
    Martin Wind
    Peggy Youell
    Poker Chen
    Kees
    Darko Sperac
    Truls
    Paul Moffat
    Anders Öhrt
    Marco Tiraboschi
    Michael Kuhinica
    Fraser Cain
    Robin Scharf
    Oren Milman
    John Rees
    Seth Brothwell
    Brian Goodrich
    Clark Mitchell
    Kasper Schnack
    Michael Hunter
    Klemen Slavic
    Patrick Henderson
    Long Nguyen
    Oct todo22
    Melisa Kostrzewski
    Hendrik
    Daniel Munter
    Graham Henry
    Duncan Orr
    Andrew Walker
    Bryan Egan

    www.patreon.com/robertskmiles
  • Science & Technology

Comments • 1 125

  • Jack Frosterton 3 hours ago

    sudo lol xD

  • Juicy_Shitposts 2 days ago

    Pinker was on Epstein's plane, just sayin'.

  • valar Month ago

    Thanks man. Pinker's rather obviously flawed views on AI really annoy me.

  • monu2619993 Month ago +1

    Noam Chomsky looked like an absolute lunatic discussing the lack of threat from AI. If it weren't for his work in linguistics, I would think he was an absolute intellectual dimwit. And let us not discount the fact that he is a massive apologist for Islam, one of the most hateful, bigoted and intolerant ideologies prevalent today.

  • Matthew Greer Month ago

    I'm beginning to really enjoy your videos, but I'd like to offer a suggestion. As someone new to your channel, the multiple references to previous videos, although really useful, kind of discourage me from watching because I don't have the necessary backstory. I know this isn't fully founded, because you do give a summary for people who didn't watch that video; perhaps you could refer to your other videos at the end instead. Maybe this isn't useful feedback at all, but it could be at least something worth thinking about.

  • Chandler Coates 2 months ago

    Man, if Python had understood that, I wouldn't have a job

  • mark heyne 2 months ago

    Aside from the issue of whether AI might have malevolent intentions [if AI can have intent or self-determined aims], it might misinterpret instructions, especially if the programming is ambiguous or faulty. Then there are bad human actors who could program AI for nefarious purposes, the program having no criteria [ethical rules] by which to disobey or ignore them.

  • What'a'nerd 2 months ago +1

    8:25
    "Intelligence in one domain does not automatically transfer to other domains"
    I'm pretty sure it does; however, 'knowledge' in one domain does not necessarily transfer very well to other domains.
    Confusing knowledge with intelligence is a dangerous mistake.
    'Hard drives can hold ridiculous amounts of knowledge, but I would never consider them intelligent.'
    'A genius is a genius, no matter the subject matter.'

  • Zetapology 2 months ago

    I thought he was Peter Capaldi

  • efenty FNT 2 months ago

    society is on the wrong path, you golem

  • Sir Zorg 2 months ago

    "I hate this damn machine
    I wish that they would sell it.
    It won't do I want,
    only what I tell it" - The Programmer's Lament (I couldn't find the origin)

  • Patrick Davison 2 months ago

    Nice use of the song Yoshimi :)

  • Nexus Clarum 2 months ago +1

    I'm pretty sure depression is predominantly a first-world problem. The data seems to suggest adversity is a healthy and necessary part of life for human beings. Once you take that away from human beings, they seem to be like the mice in the mouse utopia experiment and start selecting themselves into extinction. Look at the birth rates in places like Norway, Sweden, Germany etc., countries where you have all the ability in the world to be prosperous but you CHOOSE not to be. Instead you'd rather dedicate your life to an aimless pursuit of self-pleasure and degeneracy. No. You need more adversity and suffering and struggle in your life. It's coming.

  • Gummiel 2 months ago

    Also, if I know nothing at all, as Pinker sometimes seems to suggest, I'd much rather assume it will be dangerous than helpful. I mean, I won't get a second chance if it happens to kill me. So even if there is a 95% chance it will help me and a 5% chance it will kill me, I will not finish it before I can turn that into 100%/0%.
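
    As a toy expected-value calculation (illustrative Python sketch with made-up numbers; the only real assumption is that being killed is weighted far more heavily than being helped, because it is unrecoverable):

        # Toy expected-value sketch of the 95%/5% point above.
        p_help, p_kill = 0.95, 0.05
        u_help, u_kill = 1.0, -1000.0  # death is unrecoverable, so weight it heavily

        ev_switch_on_now = p_help * u_help + p_kill * u_kill  # 0.95 - 50 = -49.05
        ev_keep_working = 0.0          # finish the safety work first, then switch on

        print(ev_switch_on_now < ev_keep_working)  # True: waiting wins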

  • quasimod 2 months ago

    I hope the experiment with "low effort content" convinces you to continue making the "high effort content" that I come here for. I will not watch anyone ramble for 45 minutes.

  • MotoCat 2 months ago

    I love watching your videos and learning about the philosophical consequences of AI

    But most of all, that floofy hair at the end. It is glorious. Wear it with pride and make everyone jealous! Mainly me though, I wish I had hair that glorious

  • blivvy 2 months ago

    TIL Steven Pinker is an agent of our apparently quite recent but probably not entirely new secret AI overlords...

  • tomhrio 2 months ago

    It's almost like Pinker is bankrolled by a pro-automation think tank or something

  • Richard Meunster 2 months ago

    AI won't come as a Terminator-style takeover. It'll strike in small ways: things like the Chinese citizen score and Google's algorithms.

  • julianb188 2 months ago

    On the idea that things have gotten better over time: they have, but only because we were always groaning about what was wrong with the world and fixing it.

  • hoggif 2 months ago

    Ethics and morals are a problem with people too. When you have a goal like 'get more money', people often do things that are not ethical (or legal). That is not limited to AI.

  • Deplorable Mecoptera 2 months ago

    Make a paper clip minimizer
    Make two paper clip minimizers

  • onshore1ft 2 months ago

    Your channel is fantastic and your points are valid. Sadly, I think you expect too much from what is just journalism, not academia.

  • andarted 2 months ago

    'No ONS, no F+, and don't send me dick pics' - by every second woman, published on Tinder
    'Yes, We Are Worried About the Existential Risk of Artificial Intelligence' - by Allan Dafoe and Stuart Russell published on MIT Technology Review
    Such statements imply a painful chain of prior experiences. I feel you guys.

  • Num nut 2 months ago

    I noticed what you did with the song at the end. Nice.

  • starofcctv94 2 months ago +9

    Ah, the realisation process of many people regarding Steven Pinker:
    1) Respect Pinker's well-researched content.
    2) Pinker writes an article or book on your personal area of expertise.
    3) Realise Pinker is pure neoliberal ideology distilled into a man with a bad haircut.

  • Tsunami! :o 2 months ago

    1:25 The side of MY people is going down too. I don't care if there are more people; if my genes get replaced, that is a big enough loss to fuck everything up. Do you want you and your entire population, each and every individual who looks like you, not to reproduce, not to interbreed with others, but to simply die?
    We created a beautiful world to live in, not to disappear without harvesting the fruit of our labor.

    • DeoMachina 2 months ago +1

      99.999% of your genes are getting replicated, and you're sitting here whining that it's not 100%. Guess what, moron? Mutation means that it can NEVER be 100%. And that's ignoring the fact that your "people" don't seem to be going anywhere; it seems like there's more of them than ever before.

      Also, you realise your subscription list is public, yeah? There sure are a lot of white supremacists you're interested in. Extremists who want to eliminate other types of people don't get to complain about feeling threatened. YOU are the threat here.

  • David Messer 2 months ago

    It's amazing to me that so many smart people don't have a clue about why ASI may be dangerous. They say: just design it to be safe! But the whole thing about a super-intelligence is that IT WILL BE SMARTER THAN US. If you have something that is smarter than us, then, BY DEFINITION, it will be unpredictable in what it does. We can't even predict what human beings will do, much less a super-intelligence.
    It's possible that it will choose not to harm us, but even in that case, will we like it? Let's say that it loves us and wants us to be safe and happy. Do you want to be treated as a pet?
    And that's probably the best-case scenario.
    There is an old book by James P. Hogan called "The Two Faces of Tomorrow" that shows what could happen even if we take every precaution when designing an AGI. (I recommend reading it.) It shows how long computer scientists have been thinking about this problem. I haven't seen any solutions that come close to being safe, IMO.

  • Grixli Panda 2 months ago

    2:48 This reminds me of arguments made against so-called "conspiracy theorists" based on the idea that they just can't handle the idea of chaos psychologically, and so have to invent some shadowy organisation that is responsible for this or that particular ill in the world. Btw, Steven Pinker flew on Jeffrey Epstein's Lolita Express. I called it a number of years ago that Pinker was a potential child rapist, but I see that this video came out before the evidence became general knowledge.

  • ihavegotnoidea 2 months ago

    btw suicide is up and depression is up.

  • Gwenyth Wynne 2 months ago +1

    Pinker has a long, long history of cherry-picking data to make his confirmation-bias-ridden theories seem more viable than they are. He's been outed as a fraud over and over. The fact that you trusted him to begin with suggests that this is the first time you've read a piece of his on a specialty of yours.
    The feeling you're feeling is how anthropologists and actual evolutionary psychology researchers (rather than armchair philosophers) have felt about his post hoc theorizing for years now.

    • Computational Trinitarianism 2 months ago

      "Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward-reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story-and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know."

  • Eric Awful 2 months ago

    Good on ya for doing this video.

  • Kirumy 2 months ago +1

    This is an ethical literature review in video form. I'm very impressed; I wish I could be so knowledgeable and creative.

  • xcvsdxvsx 2 months ago

    It's far more likely that this article isn't uncharacteristically bad: they are all this bad, and this just happens to be the field you understand. If you understood the things he wrote about other fields, you would find those just as bad as this one. With the exception of Steven Pinker's own field, of course.

  • Patrick Demko 2 months ago

    Another thing is, it's very possible humans could be smart enough to make an extremely powerful AI, and smart enough to make sure it's safe before running it, but the only way to do that is to be very worried and thorough in making sure the AI is safe, which is exactly what he's saying not to do.

  • Christopher Gibbons 2 months ago

    You have to understand, this article is not written for you and me. This is written for the layman: not the generally scientifically literate, but the actual random who thinks that a theory is a hypothesis. They need to be told that there is nothing to worry about because, despite their ignorance, they still get a voice in the distribution of grant money.
    To paraphrase Scott Adams: there is nothing more dangerous than a manager with a magazine article.

  • Jared Jacobsen 2 months ago

    An AI smart enough to carry out commands like 'go make some paperclips' would surely require some level of general intelligence, and for all we know that level of general intelligence could be sufficient for it to understand what humans actually want. His refutation of this was merely the statement that you can be smart in some ways and dumb in others.

  • Severin Schmid 2 months ago

    First off, your arguments against the points made in Pinker's article seem very reasonable to me. However, I do not find it surprising that you found the article to be lackluster. Pinker probably put as much research into it as he does into every other article he writes. It very likely was not some strange, domain-specific dud of his; he is just not very knowledgeable about the subject of robot safety. Articles of this kind are always received badly by experts in the field.

    In a strange way, the problem really lies with you:
    * Firstly, you know a lot about this subject, as it is what you work on and think about every day. Obviously your opinion on it will always be more nuanced and "complete" than Pinker's.
    * Secondly, you have already formed your own opinions about the subject, as opposed to previous articles of Pinker's you have read. These opinions are based on information you have gathered over time, papers you have read, conversations you have had. However, one needs to recognize that even if Pinker had had the same "data set" to draw from, he might still have written this article exactly as it is. He might find some things that had a great impact on your thinking about AI safety to be irrelevant, and vice versa. So even if he'd done his research, you might still have had to make this video.
    * Thirdly, you have a vested interest in the field of AI safety continuing to exist and be considered important. In a way, even if his arguments were more convincing to you, you can't really let yourself be convinced by them. Thus, when you say you don't want AI safety to become a casualty in the fight for optimism, this sounds a bit strange to me. Does Pinker really call for the eradication of AI safety research? Or is its existence exactly what makes Pinker so optimistic about the future of AI? Since, as you yourself say, science is an important part of why things are getting better.

    I hope this didn't read like a personal attack; it really isn't intended to be one. Your videos are very well done and fascinating. Keep up the great work!

  • Joel Abraham 3 months ago

    saying 'right' at the end of every sentence isn't what makes things right

  • Horia Cristescu 3 months ago

    Robert, when you say 'we can put cars on rockets and fly them to the moon', the key is 'we' and not 'I' or 'you'. A single human intelligence can't do that. It's through the accumulation of knowledge and technology that we can. We're not that general an intelligence. Flying to the moon falls under 'other hobbies and pursuits'.

  • Dr Astuce 3 months ago

    Good quality and thought-provoking, I like it!

  • Sean Spartan 3 months ago +1

    Very fair assessment

  • Kurt Angerdinger 3 months ago

    Wrong, wrong and wrong. Welfare is going down, jobs are going down, the average income minus inflation is going down, homelessness is going up, etc. So sorry to destroy your dream, but this society is the worst since the 1920s/1930s.

  • Orbital Vagabond 3 months ago +4

    If humans are smart enough to design [a nuclear reactor], then they must also be smart enough to do so safely.
    *laughs in Chernobyl*

  • Jop Mens 4 months ago

    I think if you were to hypothetically raise a human child in an alien family, you would see what general intelligence means: the child would probably learn to think in alien ways, limited mostly by its senses. The point is that if you don't learn and develop in all possible ways, that doesn't mean the *potential* to do all that isn't there.

  • Woodworking Fangirl 4 months ago

    One word: "Epstein".
    And now look at that testimonial on Pinker's Enlightenment book: Bill Gates!
    Bill Gates? One word: "Epstein".

  • Brandon Sergent 4 months ago

    ~12:25 I don't like the assumption that it's easier to make an AGI unsafe just because you can imagine simpler versions of toxic instructions. I assert there is an equal number of equally simple beneficial models that don't rise to the level of AGI obeying our commands. If anything, I assume the vast majority of all possible configurations are best labeled "utterly inert/useless" rather than either harmful or beneficial. After all, the only difference between trash compacting and stomping your foot is target choice, hehe. Selecting places to stomp randomly would give you, I assume, mostly useless stomps, a few bad stomps and a few good stomps. See what I'm saying?

    • Robert Miles 4 months ago +1

      The world is not neutral; we've spent a lot of time and effort on optimising the world to be good for humans. You'd get mostly neutral stomps, a large number of bad ones and a tiny number of good ones, because for most things that humans care about, we've already put some work into setting them up how we like them. Stomping on a randomly chosen artefact will almost never be good.
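
      A toy simulation of this point (a Python sketch with an assumed utility function and noise scales, purely illustrative):

        import numpy as np

        # Model the world as a state vector that is already near-optimised
        # for what humans want; a random "stomp" is a random perturbation.
        rng = np.random.default_rng(0)
        target = rng.normal(size=50)                      # the state humans want
        world = target + rng.normal(scale=0.1, size=50)   # world: already close

        def utility(state):
            return -np.sum((state - target) ** 2)  # peaks at the wanted state

        base = utility(world)
        stomps = world + rng.normal(scale=0.5, size=(10_000, 50))
        improved = np.mean([utility(s) > base for s in stomps])
        print(improved)  # ~0.0: random changes almost never make things better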

  • EmoDuck13 4 months ago +1

    Never has a USclipr hurt me so much as Robert Miles' disappointment in me for not reading the article in the description....

  • Dillon MacEwan 4 months ago

    Pinker consistently makes sweeping claims and misquotes or misrepresents other scholars to fit his positivist proselytizing; the case with Russell you mentioned in the video is a good example. Worse, he is too arrogant to own his mistakes and shift his position even when people like Russell pull him up about it.
    Pinker is in the business of telling comforting bedtime stories to centrists and free-market capitalists

  • Echo Tear 4 months ago

    Pinker is goal-driven.

  • Lance Winslow 4 months ago

    Yes, Pinker often makes intellectual mistakes. I've questioned some of his stuff too, but overall, he's an interesting character.

  • threeMetreJim 4 months ago

    The level of intelligence of an organism or entity directly influences its danger to those at levels below. As humans are at the top of the pile currently, we pose a danger to everything else, and that's pretty easy to see (pretty much anything endangered is endangered due to some form of human activity). If something surpasses humans in intelligence, it's pretty easy to see that whatever it is would likely pose at least some sort of danger to those below, which would include humans.

  • deepdata1 4 months ago

    I'm totally with you on everything you said in this video.
    BUT here's the thing that Pinker was probably addressing: people (and I don't mean AI researchers, I mean laypersons) are currently unreasonably afraid of AI systems. Not only is this a problem when it comes to funding for AI research or future legislation, but people really should be worried about other things right now. I am an AI researcher (although I'm not that far into my career), and what I'm losing sleep over is that we won't be able to see even a very basic AGI system before we're all killed by climate change.

  • Reidar Wasenius 4 months ago

    Very well said!!

  • Philip O'Carroll 5 months ago

    The problem with Pinker is that he is political; he's left science and objectivity behind.

  • Calum Carlyle 5 months ago

    I hope you realise that your appeal to statistics at the beginning of the video, to try to show that things are getting better, amounts to an attempt to deny climate change. Most of us who believe the world is going to hell in a handcart feel that it is because of climate change. Statistics about crime or war are irrelevant here, while statistics about climate change are far more apt. Feel free to check those statistics for yourself.
    The truth is that the robot uprising is not a threat, because we will have wiped out our own species long before we can develop AI.

  • Binu Jasim 5 months ago

    I spot a contradiction at 12:22: "It is much easier to build an unsafe AGI than a safe AGI." How can it even be AGI if it can't reliably understand what a human meant? If it is that incapable, I guess it would be impossible for it to figure out complex ways of wreaking havoc.

    • Rowan Evans 5 months ago +2

      Of course it understands what the humans meant, but the AGI won't just decide to do what we meant to program it to do instead of what we actually programmed it to do; that would go against its programming.
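
      A minimal sketch of that distinction (hypothetical Python objective functions, not anyone's real code): the agent may model human intent perfectly well, yet its action selection only ever consults the coded objective.

        # What we actually programmed: count paperclips, nothing else.
        def coded_objective(state):
            return state["paperclips"]

        # What the AGI correctly understands we meant. Modelling this
        # changes nothing, because it is never used to choose actions.
        def modelled_human_intent(state):
            return state["paperclips"] - 1000 * state["humans_harmed"]

        def pick_action(actions, state, step):
            # step(state, a) predicts the state after action a; the agent
            # maximises only the objective it was actually given.
            return max(actions, key=lambda a: coded_objective(step(state, a)))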

  • Skillus Eclasius II 5 months ago

    As long as the experts are scared of artificial intelligence, I see no reason to be afraid of it myself.

  • Robert Galletta 5 months ago

    POLLUTION WILL KILL US BEFORE AI DOES

  • Robert Galletta 5 months ago

    MY PHILOSOPHER IS ALAN WATTS