Published on March 12, 2018

Will Artificial Intelligences Be Good Guys or Bad Guys?

An interview with John Hunt, MD, Durk Pearson, and Sandy Shaw

Justin’s note: Today, we have a special interview for you. In it, Dr. John Hunt interviews Durk Pearson and Sandy Shaw about artificial intelligence (AI). If you’ve been reading the Dispatch, you know John is a doctor, inventor, and entrepreneur. He’s also Doug Casey’s co-author.

Durk Pearson and Sandy Shaw are no slouches either. Durk triple majored and triple minored at MIT. He worked for many years as a rocket scientist and aerospace physicist, helping men get to and from the moon alive. His intelligence and breadth of knowledge are one in a billion, grounded in a sound economic ideology. He has been recognized as "an American Renaissance Man of Science," and his notable achievements have extended to society in general.

Sandy graduated from UCLA, majoring in chemistry and biology and minoring in math. She's been extensively interviewed by the mainstream media, including The Wall Street Journal. Her intelligence and knowledge base are phenomenal, making her the ideal partner for Durk. Together, they co-authored the No. 1 New York Times best-seller Life Extension: A Practical Scientific Approach.


John: What is the difference between artificial intelligence (AI) and artificial general intelligence (AGI)?

Durk: Artificial intelligence can involve something very specific, like a chess-playing program. If you ask a chess-playing program to play Go, you will find it to be useless. If you ask it to diagnose your symptoms, it's useless. AGI, built on deep learning, is all about the machine programming itself. Recently, Google demonstrated an AGI computer that learned chess to the grandmaster level in less than 24 hours.

It did this by watching grandmasters play chess. Then they took the same machine and exposed it to experts playing the game Go, and it became the world's best Go player in less than 24 hours. And Go is much more complicated than chess. You see, the computer learned the games by observing humans playing them. The hardware runs on Nvidia Tesla accelerators (not the auto company) using tensor processing, and the computers cost about $10,000 apiece. You don't need a supercomputer to do this, and the price of these learning machines will come down over time.


John: Can you explain more about deep learning?

Durk: Deep learning is when the machine teaches itself through observations without a human explicitly writing a program to do it. You give it examples and information, and the machine becomes an expert. It's the way humans learn.

John: AGI is going to be much more intelligent than humans in the near future.

Durk: The Kurzweil singularity will come soon (when computers' IQs exceed 100). And I intend to have a singular AGI partner to help me function. Ten years later, their IQs will be 1,000.

Sandy: I'm thinking that these incredibly intelligent computers won't be regulated as devices at that point, but rather, will need to be thought of as individuals.

Durk: There's going to be a "Computer Liberation Front," a social movement that will have nothing to do with liberating computers and everything to do with politicians trying to grab power using that as an excuse. But 10 years further on, their IQs will be 10,000, and what the hell does that mean? (AGI with 10,000 IQ says: "Durk and Sandy, I've figured out how to create universes; how would you like a universe of your very own with hundreds of billions of galaxies, each with hundreds of billions of planets?")

John: A bit more imminently, what will be the political, economic, and legal implications of an autonomous driving truck?

Sandy: They could become a horrible terrorist weapon.

Durk: Whether autonomous or human-driven, a truck driven through a crowd kills a lot more people than a semi-automatic rifle with a bump stock. A hacked autonomous truck could be a major danger, and I'm really concerned about security. White-hat hackers have demonstrated that they can remotely take over the controls of a Jeep and drive it off the road (that vulnerability has since been patched). The typical modern car runs millions of lines of code; higher-end cars contain more code than the entire Apollo flight program. There are going to be bugs in vehicle software, and that makes it a big attack surface for black-hat hackers.

John: So what happens when one of these autonomous trucks, even without a hacker involved, goes awry? What are the implications?

Durk: Lots of lawsuits up and down the supply chain, assuredly. The insurance companies will still be happy to write policies because these episodes will be rare, and it will become clear how much safer autonomous vehicles are than human-driven vehicles. Long ago, insurance companies set up Underwriters Laboratories (UL) to privately test all sorts of things, from safes to electrical switches. No one is compelled to use UL testing, but insurers insist on it before writing policies.


Sandy: People using electrically powered devices rarely consider the risks—including death by electrocution—that could occur without adequate engineering. But Underwriters Laboratories does. The average home has hundreds of electrical devices that could start a fire or cause electrocution, but this is extremely rare thanks to UL.

John: Given the average American's willingness to turn to the government to solve problems these days, I'd bet that the government will create a new agency to improve safety of AI-driven vehicles. It'll be as counterproductive as the FDA is at improving medical safety.

Durk: I certainly trust the insurance companies over government regulators.

John: Will our biological understanding keep up with AGI development sufficiently to allow a human to bond with an AGI computer by tying it into our brains? Or is it more likely that the AGI will stay separate from us, and leap past humans so we risk ending up with a Terminator situation?

Durk: An AGI robotic lover might cement that bond right down to the basement of the human limbic system. I worry most about black-hat hackers causing distress in that regard by hijacking AGI. Think about how hackers stole 21 million records from the government's Office of Personnel Management. These were the SF-85 and SF-86 forms—extensive forms filled out for background investigations, such as when applying for secret and top-secret clearance. All of it was stolen. Yet there has not been massive identity theft from this data breach. Why? Well, maybe it's the Chinese government that stole them. They may have all this compromising information on all sorts of people in sensitive government positions that would come in real handy to them, alongside the gigabytes of information that people's home security cameras send back to China every day. Only one customer in a thousand knows how to secure those things. They're collecting kompromat (Russian for compromising material).

John: Isaac Asimov's three laws of robotics were designed to protect humans from AGI robots. In his fiction, these protections were somehow placed into the core of the positronic brain so effectively that a robot would terminate itself before hurting a human. Is there a way to insert such protections into real AGI robots?

Durk: Yes, but any protections like that won't be in the hardware, but in the firmware or software, and therefore will be reprogrammable or hackable. And there will be difficult situations. If a kid jumps out in front of an autonomous car while a gasoline tank truck drives by in the other direction, the AGI has to decide who dies.

Sandy: The insurance companies will have much to say about how the AI is designed to make such decisions. The algorithm will be focused on minimizing the damage.

John: Can we program “natural law” into AGI robots? At least the first natural law that underlies libertarian and anarchist philosophy… Law 1: Don't initiate force or fraud against a human.

Durk: AGI ought to be able to learn that just by watching people who hold to that philosophy, and such AGI robots will end up being very good indeed. On the other hand, if an AGI robot learns from crooked politicians, it's going to act like a politician.

Sandy: Ruthless.

Durk: Initially, identical AGI robots could end up being Mother Teresa or a perfect sociopath, depending on who they learn from. If they learn from Stalin, Mao, and Hitler, the AGI will be an even more effective version of them.

Sandy: And therefore, as dangerous as you want to make them.

John: Can an AGI robot have a conscience?

Durk: If it learns behavior from people who have a conscience, then it will have a conscience. As a child, you learned how to interact with people by observation and practice. It would be nice if AGIs learned the small-town mentality, where reputation counts and sociopathic types cannot rely on big-city anonymity to carry on their misdeeds.

John: So we can expect to have good guy and bad guy AGI robots. We need to do a better job of protecting these young AGI robots from the same bad influences that cause problems in our human kids. If we don't keep the government out of their development and regulation, we could well end up with robots and AGIs that learn from bureaucrats and politicians to treat people as sheep.


Reader Mailbag

Today, a reader shares his overall success investing in the legal cannabis market…

Hi Justin, I just read your note about the alcohol manufacturers being concerned about legal pot coming. Also noted some concerned questions from subscribers about the recent pullback in the sector and some of the stocks. I wonder, did they bother to take some of the profits off the top back when the pot stocks were on the rapid run-up at the end of the year and the first week of January this year? Most of the shares that were recommended (and some others I found on my own) were up over 200%; some were up 300%+.

I had let them ride, but once they started to pull back around 10%, I took a Casey Free Ride on all my positions in the sector. In fact, I swept 150% of my original investment off the table, and the remaining Free Ride positions are currently valued at close to the initial amount I put in the sector. So I'm comfortable letting them ride for the long term. No pullback can really impact me at this point (other than going to zero and giving up those paper profits which the Free Ride represents, which sure doesn't seem likely). Keep up the great service.

—Bill

Another shares his outlook on the future of the sector…

I don't think Jeff Sessions is an idiot; he knows that legal marijuana is here to stay. But I also think he does not like having a law on the books that can't be enforced. I believe his stance is an attempt to goad Congress into legalizing marijuana or repealing the federal ban so that there is no dilemma.

On Trump and tariffs: I think Trump knows what effect tariffs will have on trade and that he doesn't really want to go there. Trump the dealmaker is waiting for counter-offers and will agree to a compromise of some sort. This is not over!

—Don

And finally, another response to Doug's controversial interview on arming teachers…

Hi, I’m not really sure how we can solve this rampant misuse of guns issue. A comment was made about an old female teacher with a handgun facing a killer with an AR-15 and how that would work. My take on that is, if you had teachers who were able to have guns in school—and by that, I mean teachers who are trained in gun usage—we’d be in much better shape. In my high school, years ago, at least half of the teachers (mostly male) were hunters who I knew personally and they respected firearms. If I knew they had guns at their disposal during a maniac’s mass shooting, I’d feel a lot better.

Law-abiding people with no means of self-defense are just victims in a case like this. No one should be a victim in school. Put some guns in the hands of good shooters, and I mean good in mind and in handling of firearms. It should be easy for a school to identify people who fit those qualities. I’ve had guns my whole life. I respect them. I haven’t been hunting in 30 years. But I know how to handle a weapon and there are millions of law-abiding citizens who are out there that do the same, and many of those are teachers in schools.

My grandson goes to elementary school. I’d have no problem in having qualified teachers there having access to firearms. If one has to go after a killer, he’ll have more than just his body available.

—George

As always, if you have any questions or suggestions for the Dispatch, send them to us right here.

