Dan O’Dowd was not widely known until Jan. 16, when he purchased a full-page ad in the Sunday New York Times titled “Don’t Be a Tesla Crash-Test Dummy,” which was extremely critical of Tesla’s deployment of its prototype “Full Self-Driving” software, tested by ordinary customers. The ad launched his new “Dawn Project,” devoted to making sure the software in critical systems and cars becomes secure and bug-free.
His larger complaint covers all the insecure critical infrastructure out there, not just cars. As we enter a world where a powerful cyber-intrusion actor like Russia capriciously enters into war and potential conflict with the West, this threat is far from hypothetical.
O’Dowd is the CEO of Green Hills Software, which specializes in software for “embedded systems” (built-in microprocessors dedicated to hardware), with customers in the aerospace, military and automotive realms, among others. This, he says, has made him a billionaire, and he now wishes to devote serious resources to the cause of his Dawn Project. In taking on Tesla, he enters a highly polarized fray, as Tesla has large legions of loyal supporters and critics. His message has several prongs worth examining:
- Tesla’s FSD system, like most other driving systems, is written using insecure and buggy “consumer grade” software methodologies, and thus presents great risk of failure and computer intrusion.
- This fault is also present in the software controlling many aspects of our critical infrastructure.
- Industry has largely accepted that software which is perfectly robust against these problems is impossible, and continues to deploy poor software.
- O’Dowd has developed methodologies to write software that has no bugs and can’t be hacked.
- Security can’t be proven by testing, only through development methods.
- He appears to be the only person in the world who knows how to do this, and while his methodology is not easy to learn, he wants to build a team trained in it.
- He can back up his claim with his history of making such software in the classified military and aerospace realms, though this is difficult to prove because of the secrecy in those realms.
- He can also back it up through a new basic-function phone he has built, which he claims provides perfectly secure calling and messaging, and which he plans to offer to the public.
- The answer is never “black-box” software such as that produced with machine learning techniques like deep neural networks.
- In cars, he expects to solve the problem with classical algorithms using his methodology, plus sensors in the infrastructure and communication among the cars and infrastructure.
Are we doomed?
There is not much dispute in the security world that a large number of important systems have serious security flaws. Indeed, the question that is more puzzling is why the world works at all with all these insecurities. O’Dowd feels that we are at serious risk in the event of actual conflict with powerful nation-state actors who can compromise our systems, where they could cause disruption, shut down essential services, or in some cases, destroy important assets or even harm people. Intelligence agencies are regularly compromising systems all over the world, though primarily for espionage purposes. Criminal actors also do it, usually for profit with things like ransomware.
The warning for some time has been that if conflict ever gets serious, we’re all in a lot of trouble. This alarm has been sounded for several decades, but the systems have not improved and in many cases have gotten worse. Now, with a major cyber-power engaged in war, the cause for concern is even higher.
With cars, there are several risks, all the way up to an enemy taking control of cars to cause crashes and harm both occupants and others on the road, or in extreme cases, anyone a car can reach. Significant harm can also come from simply snarling roads and transportation systems. O’Dowd won’t get much argument on his first three points, though many companies claim they are doing good security.
While O’Dowd makes a number of hyperbolic statements, the risk he describes is real, and other computer security researchers generally concur.

Who can solve it?
You would get more debate about whether O’Dowd’s methodologies are the answer, and whether he is the only person who knows what to do.
O’Dowd claims he has built hack-proof, bug-proof systems for his customers. The problem is, his customers can’t talk about it, or won’t because of a culture of secrecy, even after some things are declassified. That puts him in a hard position. He says he’s been doing it for 25 years, building software for fighter jets and working on top-secret nuclear programs. O’Dowd says he’s been trying to convince the auto companies, some of whom are his customers, that they can have security, and trying to convince many government agencies of it, but they all say it’s not practical and not going to happen.
He says he understands the hubris of claiming to be the one who knows how to fix things, and understands that people will discount him because his claims sound too extreme. He’s been told that “a million times,” he says, but he took that advice and tried the humble way first, and nobody paid any attention, so he’s left only with making the grander claims.
So instead he is funding the Dawn Project, and plans to recruit a team of top programmers and train them in his methods to make it happen.
The project is not a non-profit: it will charge for services, and Green Hills Software may get business too. As such, a number of critics have accused O’Dowd of a conflict of interest in his campaign against Tesla. Some Tesla fans are sadly fond of accusing anyone who disagrees with them of ill motives, and O’Dowd would probably present a more acceptable face if this were a non-profit. At the same time, my judgment after talking to him at length was that this was not remotely an elaborate ploy to generate revenue. There are better ways to do that.
Can you make things secure and bug-free?
O’Dowd has worked in an area where security is vastly more valued. People have made fairly secure systems in the military world — though there are also many insecure systems there. One reason they can do this is they are allowed to constrain users far more to follow good procedures. Most security breaches in software come not from software flaws but from human errors and “social engineering” attacks where an authorized user is tricked into helping somebody unauthorized get what they want, or get access to do more. In the military world, if the commanding officer has dictated good policies, it is more likely they will be followed.
The phone O’Dowd offers is an example. He claims the phone is perfectly secure for calls and messaging, but that’s because that’s all it does. He expects people who carry it will also carry a regular smartphone to run all those other apps. In the past, it was common for people to carry 2 phones or even 2 laptops, one secure and one not, and he argues they should return to that if they really want their communications to be secure.
Real-world users, unlike military users, can’t be told what to do. And there is pressure to be insecure, because secure is often harder to use, and the easier product wins in the market, leaving the secure product without the funds to keep up. The users may not be wise in this choice, but it is what they do. As such, there is considerable skepticism about the claim that the Dawn Project, and only the Dawn Project, can make unhackable, bug-free systems.
Moshe Shlisel, CEO of Israeli automotive security company GuardKnox, has a similar history to O’Dowd, with experience securing software for the Iron Dome and military fighter jets, yet he is skeptical that anything can be intrusion-proof. “Claiming it can’t be hacked is a contradiction in terms,” says Shlisel, though he does believe his company can provide adequate security in cars with the right architecture. “The defense line will always be penetrated.” Shlisel asserts that security requires a design methodology built “with the assumption it will be penetrated.” Shlisel works with Green Hills Software and has great respect for them, but names many other areas of vulnerability outside the spheres they have worked in.
One of the most promising technologies for securing software is operating systems that use capability-based security. This technology has been in development for some time, but it requires new operating systems and rewrites of most software, which has been a hard goal to attain. Mark Miller, chief scientist at Agoric, which is building secure programming tools for smart contracts using this approach, worries that securing “complex systems necessarily involves principal-agent problems,” which occur when a principal (like you, or even a carmaker) has to trust an agent with security. To really trust a system, you have to know as much about its security as the agent, which reduces the value of the delegation. “These problems are fundamental; you don’t solve them,” says Miller. “You manage and mitigate them.” “Zero risk,” he says, “is security snake oil.”
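To make the capability idea concrete, here is a minimal, hedged Python sketch. The class and function names are invented for illustration (they are not any real operating system or Agoric API); the point is simply that code can act only on the capability objects it was explicitly handed, rather than naming any resource it likes.

```python
# Illustrative sketch of capability-based security (hypothetical names,
# not a real OS API). Under ambient authority, any code can open any
# resource by name; under capabilities, code receives unforgeable tokens
# granting only the access it was delegated.

class ReadCapability:
    """An unforgeable token granting read access to exactly one resource."""
    def __init__(self, data):
        self._data = data          # the only resource this token grants

    def read(self):
        return self._data

def summarize(cap):
    # This function was handed one read capability. It has no global
    # filesystem or registry to reach into, so its authority is bounded
    # by what the caller chose to delegate.
    return f"{len(cap.read())} bytes"

secret = ReadCapability(b"launch codes")
public = ReadCapability(b"weather report")

# The caller decides exactly which authority to pass along:
print(summarize(public))   # sees only the public resource
# summarize() was never given `secret`, so it cannot touch it.
```

The design choice this illustrates is that security reviews shrink from “what could this code possibly do?” to “what capabilities was it given?” — which is why retrofitting it requires rewriting software around explicit delegation.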
Black Box AI
O’Dowd is particularly concerned about the growing use of “black box” AI systems, such as machine-learning-trained neural networks. The programmers do not understand how they work, and fix bugs simply by adding more training data until a given mistake is no longer made. Security and reliability can’t be proven through testing, says O’Dowd, even if they can be improved.
This is a frequently discussed issue, and an entire sub-field known as “explainable AI” has arisen which attempts to peer inside the black box, but never perfectly. This concerns people, and there are even laws in Europe that demand explainable AI in certain fields. The concern is so great that when I ask people, “If you have a choice between two systems, one which is explainable, and the other which is not but in testing shows as being twice as good at safety as the first, which would you choose?” I often get people saying we should use the explainable, less-safe system. O’Dowd is one of them.
O’Dowd is a strong advocate of this position, and believes that use of opaque machine learning systems should not be permitted in critical applications. That’s going to be a major battle for him, because machine learning approaches have shown themselves to be immensely valuable, responsible for many of the important recent breakthroughs in fields including robotics. Some believe they are the only likely solutions to a number of hard problems. Developers will not give them up without a major fight, especially in self-driving cars.
Some developers believe they will make self-driving systems with “end-to-end” machine learning, where the camera images go into the neural network and commands to turn, accelerate and brake come out the other side, with a fairly opaque black box in the middle. Others are less aggressive, using machine learning tools to handle sub-tasks such as identifying obstacles in the camera view, predicting what others will do, or deciding how to respond to their movements, or all 3, but with regular software gluing them together. It’s very rare to find anybody not planning to make some use of this technology.
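The modular approach described above can be sketched in a few lines. This is a hedged illustration with hypothetical stub functions standing in for trained networks; the point is that the learned sub-tasks are isolated, while the glue logic that combines them is ordinary, inspectable code.

```python
# Hypothetical sketch of a modular self-driving pipeline: stub functions
# stand in for neural networks; the planner is conventional code.

def detect_obstacles(camera_frame):
    # In a real system, a perception network. Here, a stub returning
    # (label, distance in meters) pairs.
    return [("pedestrian", 12.0), ("car", 40.0)]

def predict_closing_speed(obstacle):
    # Another learned sub-task: predicted closing speed in m/s (stub).
    label, _distance = obstacle
    return 1.5 if label == "pedestrian" else 0.0

def plan(camera_frame):
    # Plain, auditable glue code combines the sub-task outputs and
    # applies an explicit, reviewable safety rule.
    for obstacle in detect_obstacles(camera_frame):
        _label, distance = obstacle
        speed = predict_closing_speed(obstacle)
        time_to_contact = distance / max(speed, 0.001)
        if speed > 0 and time_to_contact < 10.0:   # hard-coded threshold
            return "brake"
    return "cruise"

print(plan(camera_frame=None))  # pedestrian 12 m away, closing → "brake"
```

In an end-to-end system, by contrast, the equivalent of `plan` is itself inside the network, which is exactly the opacity O’Dowd objects to.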
O’Dowd’s self-driving plans
O’Dowd takes a fairly old-school approach to self-driving, with no black-box code and heavy use of communications between cars, other cars and infrastructure. This approach was very common in the early days but has largely been abandoned. The use of communications is often talked about in academic circles and at car OEMs, but was never considered likely among the high-tech companies that field the leading teams. Relying on communications not only means depending on systems outside your control to be reliable, accurate and not trying to hack into you; it also requires solving a major “chicken and egg” problem that will never be completely solved.
O’Dowd feels his techniques for making secure software can make the communications secure. While others aren’t so sure perfection is likely, they believe that minimal communications can be secured. My own analysis shows that the use case for direct vehicle-to-other communications is extremely limited, which makes it hard to justify opening such a large “attack surface” for little gain, even if you have good faith in your security. Consider that about 1/6th of insurance claims come from collisions with deer, and it’s fairly hard to get all the deer to wear transponders; systems must therefore be extremely good on sensors alone, so the extra gain from communications had better be tiny or something is very wrong.
Taking on Tesla
O’Dowd has chosen Tesla as his first target. It’s a very visible target, to be sure, and the subject of much controversy already. He has issued fairly hyperbolic statements, calling the Tesla FSD prototype the worst product ever released by a major company and saying that if it were released and popular “millions would die every day.”
The Tesla FSD prototype is certainly very primitive and highly unreliable, as shown in our prior reviews. At the same time, with over 50,000 people using it, there are no reports of serious accidents, and only a handful of very minor ones, which is a fairly decent safety record. Indeed, the “safety driver” approach, where human drivers oversee prototype systems, has worked extremely well, with the one tragic exception involving a negligent Uber safety driver who completely ignored her job. It seems to work even with untrained customers doing the supervision, and is at worst only slightly worse than the record of regular human driving. In spite of quite a bit of driving going on, it does not appear anybody has been hurt, and we don’t want to prohibit or interfere with things unless people are getting hurt (at a level beyond the hurt of normal driving, that is).
No system like FSD prototype would ever be released for unsupervised driving (whatever Elon Musk may promise) nor would people trust it with their lives, so the claim of millions of deaths is pure hyperbole. There is actually more concern when a system gets good enough that people actually would trust it when they are not supposed to, resulting in what is called “automation complacency.” This complacency has led to the deaths of users of Tesla Autopilot, though again evidence suggests the frequency of this is not great, and certainly far from catastrophic.
Should a time come when Tesla FSD needs an intervention only once a day instead of once every few minutes, there might be different results and a call to change course. Until then, Tesla seems an odd target when insecure critical infrastructure, which is already in use, is a much more pressing problem.
This is not to say that cars don’t need to be secured, and that car companies should not have already gotten to work on that. Many of them have, though not with O’Dowd’s approach. Eventually, insecure cars could be taken over by criminal actors for ransomware, terrorists for disruption of roads, and advanced state intelligence agencies for warfare, both disrupting traffic and even harming occupants and road users.
Car companies should be working to severely limit the attack surfaces on vehicles (the communications channels through which an attacker could enter) and making the vehicles as secure internally as they can. They also must make their own headquarters secure, since the cars will be communicating with the headquarters and even receiving software updates from them. They must do so even presuming that foreign spy agencies have planted agents among their technical staff, and secure all systems so that no one or even several individuals on the inside can mount a serious attack.
However, while Teslas and a few other cars could, if compromised, be used for dangerous purposes during a war with a country like Russia, other infrastructure is currently more dangerous if compromised.
O’Dowd plans much more in his campaign, until he convinces or forces Tesla to withdraw its prototype until it is in a vastly better state, testing it only with paid employee drivers like all other companies. He also hopes to stamp out the use of machine learning in critical systems, including cars. He has a long fight ahead of him.