Justin’s note: On Sunday evening, Elaine Herzberg was struck and killed by one of Uber’s autonomous vehicles in Tempe, Arizona. She became the first pedestrian known to be killed by a self-driving car.
In the wake of this tragedy, Uber suspended self-driving car tests in four North American cities. Distrust of self-driving cars is also starting to spread.
But you should understand a couple of things before you develop an irrational fear of this technology. First, the local police have said that Uber is likely not at fault in this accident. Second, and more importantly, human drivers are a much bigger threat to your safety. According to the Association for Safe International Road Travel (ASIRT), over 37,000 people die in road crashes each year in the U.S. alone.
Still, I realize that many people won’t get over the scary headlines. So today I’m sharing a brand-new piece from Dr. John Hunt, Durk Pearson, and Sandy Shaw—the newest members of Casey’s brain trust. In this timely conversation, they explain why you shouldn’t fear autonomous vehicles (and other robots)…
John: I own a limousine/airport transportation/bus business in Virginia. Self-driving cars are coming, and my company needs to prepare. So do its chauffeurs, and other drivers throughout the country.
Durk: Things are going to keep changing in that field. Look at a company called Waymo. It has fully autonomous cabs operating in beta testing in a suburb of Phoenix right now. No drivers at all. You call one on your smartphone, and a car with no driver comes and takes you to your destination. It only drives in carefully geofenced areas that the company has tested to make sure the vehicle can navigate them. It won't drive in snow, ice, or fog, but there isn't much of that in Phoenix. And it doesn't drive in construction zones yet.
But as far as bad weather goes, like driving on ice, a computer is much better suited for it than a human being. The computer can see with millimeter-wave radar, which penetrates fog, rain, and snow, and it has a much faster reaction time. And it will choose the right reaction, like steering into a skid instead of away from it. A computer isn't going to panic and steer in the wrong direction.
Sandy: How you should steer and brake in a given situation is all mathematical. The computer has access to all that information and a CPU that can do the math at a very high speed.
Durk: Right behind Waymo in terms of technology is General Motors. It’s planning to come out with a car in 2020 that won't have a steering wheel. Perhaps they should sell it as a status symbol: a personal limousine with a robot chauffeur. You're the king, and the servant is driving the car. That makes it high class. It'll be able to charge at least $20,000 more. The trucking industry is going to be affected too. “Truck driver” is the most common job in 12 states, and Uber self-driving trucks are already on the road.
Sandy: People may be a little upset about not having a steering wheel. On the upside, without a wheel, the cops can't bust you for driving drunk. But to some people, no steering wheel suggests a lack of control. Learning to trust the artificial intelligence (AI) that actually controls the steering will take some getting used to, much as passengers on commercial airlines have learned to trust the computerized controls that largely pilot the aircraft.
Durk: The success of efforts to hike the minimum wage is going to drive the premature adoption of automated equipment, AI, and robots. That's unfortunate, because the faster they are adopted, the more economic dislocation and emotional pain there's going to be, and the more social and cultural dislocation and unrest will occur. Your company’s chauffeurs are a case in point. Every government intervention will drive this change to robots and AI faster than a free market would. And perhaps too fast.
Sandy: That's because government policies like the minimum wage increase the cost of labor. You have to compare the cost of labor with how much money robots will save.
Durk: Consider someone who owns a fast-food joint. If they hire a new employee to clean the floor and countertops at a minimum wage of $15/hour, that's $30,000 per year, plus Social Security, Medicare, workers’ comp, and unemployment insurance. So, you could pay up to $140,000 for a cleaning robot and still see a 25% internal rate of return compared to hiring an employee.
John: And pay it off in four years. Actually, much less, because the robot can work up to 24 hours a day, 7 days a week and might be the equivalent of FOUR employees.
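Durk's numbers can be sanity-checked with back-of-the-envelope arithmetic. The sketch below (Python) shows why roughly $140,000 for the robot still yields about a 25% internalal rate of return: for a level, indefinite stream of savings, the IRR is simply the annual saving divided by the purchase price. The 17% payroll-overhead rate is an assumption standing in for the taxes and insurance Durk lists, since the conversation doesn't give an exact figure.

```python
# Back-of-the-envelope check of the robot-vs-employee numbers above.
# The 17% payroll-overhead rate is an assumption standing in for
# Social Security, Medicare, workers' comp, and unemployment insurance.

wage = 15 * 2000                             # $15/hour x ~2,000 hours/year = $30,000
overhead_rate = 0.17                         # assumed payroll overhead
annual_saving = wage * (1 + overhead_rate)   # ~$35,100/year of labor cost avoided

robot_price = 140_000

# For a level perpetual stream of savings, IRR = annual saving / price.
irr = annual_saving / robot_price
payback_years = robot_price / annual_saving

print(f"annual saving: ${annual_saving:,.0f}")       # $35,100
print(f"IRR: {irr:.1%}")                             # ~25%
print(f"simple payback: {payback_years:.1f} years")  # ~4 years
```

If the robot can work around the clock and replace several employees, as John notes, the savings multiply and the payback shrinks proportionally.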
Sandy: A robot doesn’t take time off to go to the bathroom. It doesn't get stoned. It doesn't get drunk. The robot is actually better than a human being for that job. It's unlikely to get sick as often as a human. It will be meticulous, with special tools on its arms like an ultrasonic blaster. The surfaces will be sterilized like an operating room.
Durk: Another government policy is going to accelerate this adoption too. Early tech adoption decisions by businesses are made at the margins. Suppose you've got 49 employees. With the Obamacare employer mandate in place, if you hire another employee, suddenly you have to pay $35K per year for the new employee, plus many thousands per year for medical insurance for each and every one of your current 49 employees. So, it costs you maybe $185,000 per year to hire that one additional employee.
On the other hand, if you buy a robot as your 50th staff member, you could pay $750,000 for it and break even in about four years, coming out ahead every year after that. A cleaning robot is much simpler than a fully autonomous car; it's also much smaller and requires about 1/15th the material inputs of a car. When mass-produced, cleaning robots will be cheaper than self-driving cars and cheaper than employees. And OSHA has no excuse to inspect a robot-operated factory.
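The 50-employee threshold arithmetic can be sketched the same way. The per-employee insurance figure below is backed out from the $185,000 total Durk cites; it is an illustrative assumption, not a number from the conversation.

```python
# Marginal cost of the 50th hire under the employer mandate, using the
# figures from the text. Insurance per employee is backed out from the
# $185,000 total and is an assumption.

new_hire_wage = 35_000
total_marginal_cost = 185_000   # per year, once the mandate kicks in
covered_staff = 50              # the 49 existing employees plus the new hire
insurance_per_employee = (total_marginal_cost - new_hire_wage) / covered_staff

robot_price = 750_000
break_even_years = robot_price / total_marginal_cost

print(f"implied insurance: ${insurance_per_employee:,.0f}/employee/year")
print(f"robot breaks even in {break_even_years:.1f} years")   # ~4 years
for year in (4, 5):
    print(f"year {year}: cumulative human cost ${total_marginal_cost * year:,}")
```

At these figures the robot is roughly break-even at year four ($740,000 of avoided cost against a $750,000 price) and clearly ahead from year five on.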
Sandy: Robots provide an increase in productivity, and that saves money. That money can be used to expand productivity further and broaden the products you offer. Studies haven't found a tremendous takeover of human jobs as the robot economy arrives, because robots end up making more capital available for investment. That leads to more productive human work as well.
Durk: A lot will depend on regulatory barriers, particularly licensing. For example, if Joe the handyman has to have a handyman's license for his robot, as well as an owner’s license in order to have a robot assistant, that will be a big barrier to Joe the handyman… but not much of a barrier to the big construction company owning a robot. Licensing laws will be a barrier to the small-scale ownership of the means of robotic production.
John: And that's an essential point. Robots can help us all work less and enjoy life more, but government rules and laws can lead to robotic wealth concentration that wouldn't otherwise occur.
Sandy: The thing about robots that look like people is that there is an excuse to regulate them like people, with licensing, minimum wages, etc. It may be better to have robots that don't look like people, but more like conveyor belts with arms such as you see in a fully automated indoor lettuce farm in Japan.
Durk: Having a robot that walks on four legs and has 10 arms would be invaluable. Want to put drywall on the ceiling? Hey buddy, come over here!
Sandy: Give it a pinhead size head to make it look like a real idiot.
Durk: Right, R2D2 is not scary. But if a robot looks sort of like a human, it confuses the human brain into wondering, is this thing dangerous? Think how nice it would be to have a robot butler that is also a bodyguard with a reaction time of less than a microsecond. Think about hand-to-hand combat. Don't bring a knife to a gunfight. Well, don't bring a mugger to a robot fight. And if the robot has a gun, boy is that mugger outclassed. We are talking 500 millisecond human reaction times compared to 1 microsecond for the robot. The robot is 500,000 times faster.
Sandy: Wars might get sanitized, though, by having the robots go in there and fight it out. Then it becomes so easy to have wars.
Durk: For wars, robots can be programmed to take bigger risks, they can carry much more body armor, and they don't stay dead. Indeed, minimizing civilian casualties was one of my reasons for working on the Strategic Defense Initiative; there are very few innocent bystanders in orbit. Moreover, space warfare is very expensive, and the more something costs, the less of it is bought. The USSR went broke trying to pay for space warfare, which was a core part of the 1980s SDI strategy.
Robots can help you avoid civilian collateral casualties. Today, an RPG gets used to kill the terrorists along with whatever children and hostages may be in the room. But a robot goes in there, bullets bounce off it, and it kills the terrorists and nobody else. So there are advantages to robot soldiers. One disadvantage is that robots are going to be centrally connected and call home all the time, like Windows 10 does. If you think government surveillance is bad now, it's going to be a lot worse with robots. Buy a late-model car, and it's already continually sending its location and driving data back to the manufacturer. With autonomous driving there's at least a good excuse for that; today there isn't one.
John: As the robotic revolution continues, there will be lots of protests about robots taking people's jobs. Lots of blame. But robots will mostly take the repetitive, boring, AND DANGEROUS tasks. So humans will be able to work just five hours per week instead of 40 to produce the same amount of goods and services. That leaves 35 more hours per week to goof off, philosophize, study, advance, and grow, instead of being consumed by drudgery.
Durk: I agree wholeheartedly. The model is what happened with household labor. A hundred years ago, the average woman spent 15 hours per week washing clothes: backbreaking work, lugging water, rubbing clothes on a washboard, wringing them out, hanging them to dry. Then washing machines came along and freed up all that time.
John: A washing machine is a robot.
Durk: Yes, a special-purpose stationary robot. In the same way, people will need to see how robots will benefit them. But really, they need to own and benefit from this new robotic means of production. If regulatory and licensing laws start to interfere with the little guy owning robots, that will make the cultural transition problem much bigger.
Unfortunately, we can pretty much count on interference by government in a person's ability to sell the labor of their robot to others. And the government interference will make it more difficult for individual people to own this new means of production.
Larger companies will be able to get robots, but government license requirements and rules will prevent individuals from doing so. There is always one small business that you can enter that doesn't require a business license. It's called selling dope. That's one reason that particular business is so popular.
John: One way we've envisioned robot worker ownership by individuals is that people might spend the first decade of their career working to save enough money to buy a robot that will do their work for them for the rest of their days. And the human will get the pay while sitting back and enjoying life.
Durk: Or a worker could finance the purchase of the robots, and maybe even with poor credit, because a robot could be repossessed like a pickup truck. I suspect that, as long as there aren’t licensing and regulatory barriers, we're going to have a lot of people buying robots on time, and using them in their small businesses.
John: That could help dilute a fear that the robot workforce will concentrate into the hands of those with the most capital, or the so-called 1%.
Durk: True, as long as the government doesn't make it difficult for people to own robots or to sell the work of their robots. If they do interfere, like with licensing requirements, the little guys will get screwed, as they always do.
Today, kind words for Doug Casey:
I have read and listened to Doug since I was in college (over 30 years ago). He hasn't shaped my thinking, but has reinforced my beliefs and has articulated what I think beyond my oratory skills. It has been refreshing to read and listen to him through the years and I am thankful for his bravery and willingness to say what he thinks. At an early age I always thought I was the only one who thought a certain way, and was pleased when I came across Doug and realized I wasn't the only rebel.
And thoughts on his recent interview on breaking up the tech giants:
I have followed Doug for years and agree with him nearly 100% of the time. Keep it up. If unregulated, which may be possible to some degree, I think the blockchain technology can offer a good start to getting us away from so much government control. What does Doug think about that?
I enjoy reading Doug's thoughts on the world at large and want to thank you for that. I'm also very intrigued by Doug's most recent subject and comment: "At some point, we should discuss how the decline of the US is reflected in its coins and currency." I would love to hear Doug's take on this! Keep up the good work. Thanks.
Finally, a reader responds to last week’s interview with John, Durk, and Sandy: “Will Artificial Intelligences Be Good Guys or Bad Guys?”
I really enjoyed the discussion of AIs. I can see a lot of ways this might go, but I really have a hard time seeing AIs educated in isolation, i.e. in the image of a particular person. The AIs themselves will communicate with each other, and I would expect them to not fall prey to "an unwitting tendency toward self-destruction", which tends to be the hallmark of much evil in the world. I would expect AIs to take a longer perspective, especially with regard to "if something cannot go on forever, it will stop."
One of the things that we have not seen thus far is computers that "want" things, except insofar as their programmers have instilled that desire. Clearly, as long as programmers are in charge of the motivation or objectives of computers, the good or evil of the programmer will prevail. If truly intelligent computers arise, however, it is hard to see how an entity far more intelligent than any human would allow humans to dictate its motivations.
Even a benign AI might make decisions that would cause us to recoil in horror, largely because it would not be subject to our emotional biases, but it really comes down to what AIs might determine their "purpose in life" to be. About this, we can speculate, but it's hard for me to imagine how we might control it, especially as AIs work to develop even more advanced AIs.
If you have any questions or suggestions for the Dispatch, send them to us right here.
Live Crypto Q&A This Wednesday
Last week, crypto expert Teeka Tiwari held a live emergency broadcast—"The Second Boom: How to Make a Fortune From the Next Crypto Run Up."
He talked about the recent volatility in the crypto space, one of the biggest opportunities emerging in blockchain technology right now, and how anyone can turn even a small stake of a few hundred dollars… into tens of thousands.
If you missed this event, don’t worry. Teeka agreed to host a live Q&A this Wednesday at 8 p.m. ET where he’ll be answering anything you want to know about the upcoming boom. It’s completely free—all we ask is that you register in advance. Please take a moment to do so here.