Published on May 31, 2018

Why You Can’t Ignore Italy’s Political Crisis

By Justin Spittler, editor, Casey Daily Dispatch

Italian bonds are getting rocked.

On Tuesday, the yield on Italy’s two-year government bond jumped from 0.8% to 2.7%. That’s more than triple where it ended on Monday.

It was the biggest one-day jump for Italian two-year bonds since 1989.

That might sound like a good thing. After all, Italian bonds now pay a lot more than they did a few days ago.

But you must understand something. A bond’s yield rises when its price falls. In other words, Italy’s two-year bond just had its worst day in nearly three decades.
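
To see the inverse relationship in numbers, here's a minimal sketch in Python. It prices a hypothetical two-year zero-coupon note at Monday's 0.8% yield and at Tuesday's 2.7% yield; the 100 face value, annual compounding, and zero-coupon structure are simplifying assumptions (real Italian two-year notes pay coupons), but the point is the same: when the yield jumps, the price drops.

# A minimal sketch, not an official pricing model: a hypothetical
# two-year zero-coupon note, priced at two different yields.
def zero_coupon_price(yield_pct, years=2.0, face=100.0):
    """Discount the face value back to today at the quoted annual yield."""
    return face / (1 + yield_pct / 100.0) ** years

price_before = zero_coupon_price(0.8)   # Monday's closing yield
price_after = zero_coupon_price(2.7)    # Tuesday's yield after the jump

print(f"Price at 0.8% yield: {price_before:.2f}")   # about 98.42
print(f"Price at 2.7% yield: {price_after:.2f}")    # about 94.81
print(f"Change per 100 of face value: {price_after - price_before:+.2f}")  # about -3.61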

I’ll tell you why this is such a big deal in a second. But first, let’s look at why this is happening.


• Italy is on the verge of a major political crisis…

At least, that’s what Crisis Investing editor Nick Giambruno thinks. Here’s what Nick—who’s been monitoring the situation in Italy for years—told me in a private email:

One way or another, Italy’s populist parties will come to power, and probably soon. They are already among the country’s largest political parties and are growing in strength each day.

Once in power, they will ask for concessions from the European Union (EU) that will be impossible to give, such as forgiving hundreds of billions of dollars’ worth of Italian debt and allowing the Italian government to run enormous budget deficits.

If that happens, the entire European Union could fall apart. Nick went on to explain:

Italy is not a marginal economy. It’s the third largest economy in the EU.

So, the EU has a choice to make. Either it accepts Italy’s demands and creates a moral hazard that will eventually unravel the euro, or it rejects them, at which point Italy will have no choice but to leave the euro. And if Italy leaves, it’s unlikely the euro could survive.


• The euro is the glue that holds the EU together…

Without it, economic ties weaken, and the whole EU project unravels.

We’re already starting to see the market reflect some of this risk. Just look at how the euro’s fared against the U.S. dollar recently.

The euro is down 7% against the U.S. dollar since the beginning of February. It’s now trading at a 10-month low.

If this situation intensifies, the euro will keep falling. So, be sure to steer clear of it.

I also recommend buying physical gold. If Nick’s right, many investors will likely shelter in gold because of its long history as a safe-haven asset. You can learn about the best ways to buy and store physical gold in our free report, “The Gold Investor’s Guide.”

Regards,

Justin Spittler
Punta del Este, Uruguay
May 31, 2018

P.S. If you want to profit off Italy’s crisis, Nick’s found an easy way to bet against the euro and Italian bonds. He has all the details in his Crisis Investing letter. You can access these trades, and all of Nick’s recommendations, by signing up for Crisis Investing. Click here to learn more.

Changing gears, I’ll pass the baton to John Hunt, who has a brand-new interview with Durk Pearson and Sandy Shaw on artificial intelligence (AI)…

The Intersection of Artificial Intelligence and Medicine

An interview with John Hunt MD, Durk Pearson, and Sandy Shaw

John: Artificial intelligence has been trending these past several months. There’s a tidal wave of media articles: some fearmongering; many seeking government regulation to prevent AI from being controlled by the evil corporations. As you’ve mentioned before, what could be more dangerous than to have evolving AI learn right and wrong by observing politicians and bureaucrats? If AIs model their thought processes, psychology, and decisions on lessons learned from the political class, they will come to think it is acceptable and appropriate to force humans to do whatever they decide. Terminator Doomsday. But today, let’s talk about the positives. AI is coming to the medical profession. What are some of the good and bad things about that?

Sandy: I would like to see an artificial general intelligence (AGI) robotic diagnostic machine for home use. Finding out what your symptoms mean, what may be wrong with you, is probably the most difficult part of medicine. But once you know these things, it is fairly easy for you to locate information on what might be done about it.

Durk: Doctors (and lawyers, too) will become much more productive… well, if the government doesn’t get in their way too much. If a doctor wants help with a diagnosis, AI offers real advantages: it can consider diseases that the doctor hasn’t remembered to think about.

John: The FDA assuredly will consider a diagnostic AI computer to be a regulated medical device, under their purview. That will slow things down.

Durk: But progress will be made despite the FDA. And cooperation with AI will make doctors better and more productive. We’ve seen an example of this with human chess grandmasters, who are hugely more capable than they were 30 years ago, and they’ve gotten that good by playing against computers. They can’t beat the computers, but they can learn from how the computers win. Most importantly, there’s the centaur: one or more people collaborating with a computer to play chess. In a match between a centaur and a computer alone, it’s usually the centaur that wins!

John: And so, likewise, AI can work with physicians to make a team that is better than either alone. That centaur concept gives us hope that, despite the rapid IQ advances, AI may not become the supreme intelligence on the planet and leave us humans in the dust, down the food chain. Maybe the future is the combination of human biological intelligence and AI. As a physician, I rather like that. Electronic medical records (EMRs) keep track of extensive information. There are companies that can now mine EMR data to identify patients who are at risk of soon needing surgery, before the patients even know it. Currently, this is a tool of the insurance companies. But this EMR mining could also be used for an individual patient’s benefit. Mining data is something that doctors don’t train to do. But it’s perfect for deep learning and AGI. The machines will learn to identify risk factors and signs and symptoms that a doctor might prejudicially ignore as irrelevant.
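
To make the EMR-mining idea concrete, here’s a minimal sketch in Python. The column names, the tiny made-up dataset, and the logistic-regression model are all illustrative assumptions, not any real vendor’s schema or method; a production system would use far more data and likely deep learning.

# A minimal sketch of EMR risk mining, with an invented schema and data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical EMR extract: one row per patient, made-up fields.
records = pd.DataFrame({
    "age": [66, 54, 71, 43, 80, 59],
    "bmi": [31.0, 24.5, 28.2, 22.1, 27.9, 23.4],
    "er_visits_last_year": [2, 0, 3, 0, 4, 0],
    "on_anticoagulants": [1, 0, 1, 0, 1, 0],
    "needed_surgery_within_1y": [1, 0, 1, 0, 1, 0],  # known outcome label
})

X = records.drop(columns="needed_surgery_within_1y")
y = records["needed_surgery_within_1y"]

# Fit a simple classifier on past outcomes...
model = LogisticRegression(max_iter=1000).fit(X, y)

# ...then score a new patient no one has flagged yet.
new_patient = pd.DataFrame([{
    "age": 69, "bmi": 30.1, "er_visits_last_year": 2, "on_anticoagulants": 1,
}])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of needing surgery within a year: {risk:.0%}")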

Sandy: Possibly the main thing to keep in mind is who is going to make the decisions. After the intelligences of the doctor, the AI, and the patient have worked together to come up with a diagnosis and treatment plan, will it still end up being the insurance company, or a bureaucratic ass, making decisions? Because it may not be the intelligence that actually knows the most or cares the most that decides. In the best of all possible worlds, we would hope that the best knowledge would be used to make the choice, but ultimately, he who pays will decide what he pays for, the doctor will decide what he recommends, the AI will decide what it concludes, the patient will decide what he will accept, and the bureaucrat will just try whatever he can get away with.

John: Which is why it makes so much sense for the wise individual to get control of their medical care dollars instead of doling them off to third-party intermediaries like insurance companies. How else will AI be useful in medical care?

Durk: Machine learning is used to study brain patterns found in functional MRIs. There is the potential to treat anxiety disorders, such as a phobia of moths or snakes. Usually these phobias are treated with exposure to a lot of moths, and this can work. But lots of patients are understandably reluctant to be exposed to what they fear. These patients can be taught (using reward mechanisms/biofeedback) to mimic the fMRI brain pattern that AI has found occurs in healthy people when they see a moth. Without having any idea that he is retraining his brain to treat his phobia, without being exposed to moths or pictures of moths, and without any conscious idea of what he is doing at all, the patient’s effort to mimic a healthy brain response ends up effectively treating his phobia.
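
As a rough illustration of that feedback loop, here’s a minimal sketch in Python. The “target pattern” is a stand-in for the healthy-brain fMRI signature the AI identified, the similarity score stands in for the reward cue, and all the numbers are invented; real neurofeedback is vastly more sophisticated.

# A minimal sketch of reward-based neurofeedback, with invented numbers.
import numpy as np

rng = np.random.default_rng(0)
target_pattern = rng.normal(size=50)   # stand-in for the healthy-brain fMRI signature

def similarity(current, target):
    """Cosine similarity between the patient's measured pattern and the target."""
    return float(current @ target / (np.linalg.norm(current) * np.linalg.norm(target)))

# Simulate a session: the patient tries different mental strategies, and any
# strategy that moves the measured pattern toward the target earns a reward cue.
current = rng.normal(size=50)
for trial in range(200):
    attempt = current + rng.normal(scale=0.1, size=50)   # patient tries something new
    if similarity(attempt, target_pattern) > similarity(current, target_pattern):
        current = attempt   # rewarded: the patient keeps whatever worked

print(f"Similarity to the target pattern after training: {similarity(current, target_pattern):.2f}")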

John: That sounds exciting on the one hand, and also frightening. I wonder if the same AI-fMRI augmented psychotherapy could rewire people’s synapses and become a high-tech form of behavioral control?

Sandy: If it works for phobias and PTSD—with patients cooperating but without awareness of what they are actually doing—someone could, I suppose, say, “just do this biofeedback and you will overcome your stranger anxiety.” But the biofeedback sessions instead surreptitiously train the patient’s brain to obey authority or even wire a bomb to his chest.

John: No doubt an idea for DARPA or the NSA? Or maybe Facebook’s next project. As a pediatrician, I find phone calls from moms about sick children a challenge. Over the phone, it can take 45 minutes to extract enough information to tell whether a baby is in danger from some illness. But if I see that baby in person, I can tell within just a couple of seconds whether she is in imminent danger or not. Gut intelligence and experiential intelligence combine with cognitive intelligence to guide me. Let’s say a computer can mimic the sensory inputs, but what about gut feeling?

Durk: A group of computer scientists used deep-learning software to teach a computer to tell, based on a handful of photos, whether someone was gay or straight. You might call that a gut feeling right there. Of course, that’s dangerous tech in countries like Iran, where you can be killed by the government for being gay.

John: And we cannot readily predict what features in a photo the AI might use to come to its conclusion. I expect that a similar sort of experiential process is going on in my mind when I see a potentially sick child. Sure, there are characteristics I can label as helpful in my assessment, but I bet there is a plethora of input that I am not consciously aware of that helps me decide how sick a child is. In addition to the important ability to observe visually, there are smells and sounds and who knows what else, all contributing to the gut feeling that tells me how fast I have to move to help a sick baby. No electronic medical record or regular computer is going to pull that off. But an AI that has had adequate training and has enough sensory data available could do it even better. It could access an artificial nose (these are available to sniff out gases and work in a way that is quite like AI), infrared information, and broader audio-spectrum signals: far more than I have in my personal human sensor array.

Sandy: But we have to wonder whether an AI’s bedside manner will be like a robotic sales call from the Obamacare exchange.

Durk: Another AI application involves a computer learning how to read retinal exams to determine who’s most at risk for cardiovascular disease.

John: That could prove helpful for screening. Doctors do tend to adopt tech that improves the outcome for patients. Unfortunately, doctors don’t tend to support the individual patient’s access to high tech (or medications), except when it’s going through them. The desire to control others is pervasive in our society, often justified by notions like, “it’s for their own protection.” Doctors insist on keeping control of the prescription pad, compelling patients to work through them to get medications. If supposedly good doctors justify compulsory control over everyone, I wonder how much medical AI will learn to forcibly protect humans from themselves?

Sandy: We won’t know how AGIs are going to “feel” about their intelligence and their relationship with people until we can communicate with them about it. Even then, we may not be able to tell when they’re lying. Making it worth their while to engage in a deception-free relationship, just as we would with another human we value, seems like the best approach.

John: So once again we see the importance of modeling honest and honorable behavior so that the AI learns to be a “good person.” Unfortunately, the way things are going, AI in medical care will be learning from insurance company clerks, bureaucrats, and politicians, with a smattering of input from doctors who may themselves be too eager to control people. Technology is advancing at warp speed while moral wisdom is sinking into a swamp. That unfortunate combination makes me wonder if the coming singularity might end up being an extinction-level event.


Reader Mailbag

Light mailbag today…

Are you planning on buying more physical gold after reading today’s update on Italy’s political crisis? What did you think of our featured interview on AI? Let us know your thoughts, including any suggestions you may have for the Dispatch, right here.


In Case You Missed It…

In 2006, Doug Casey made one of his best plays—and made 86,900% gains on an obscure penny stock.

It was no accident. In fact, tiny unknown stocks skyrocket every single day.

Until June 15, you can get his private blueprint for profiting from these stocks… including the investment opportunity that could be Doug’s next huge recommendation… and a massive win for Doug’s readers… Click here to get all the details.