from the doesn’t-seem-particularly-trustworthy dept
Just days after a jury found Tesla partially liable in a fatal Autopilot crash and ordered the company to pay over $200 million, Elon Musk took to Twitter with a bold proclamation: “Teslas can drive themselves!”

The timing couldn’t be worse. Because thanks to a devastating article by Electrek’s Fred Lambert that digs deep into the trial transcripts, we now know just how far Tesla went to hide the truth about what happened in that crash. The company systematically withheld evidence, misled police investigators, and actively obstructed efforts to understand how its technology failed—behavior that looks suspiciously like criminal obstruction of justice, yet somehow apparently carries no criminal consequences.
This isn’t just about one lawsuit. It’s about how Tesla’s behavior threatens to undermine public trust in autonomous vehicle technology at precisely the moment when that trust is most crucial.
Let’s be clear: self-driving technology has enormous potential to save lives. Human error is the critical factor in roughly 94% of serious traffic crashes, according to a decade-old study by the National Highway Traffic Safety Administration. Even imperfect autonomous systems could dramatically reduce that toll, and we shouldn’t hold them to an impossible standard of perfection.
But here’s the problem: overselling what these systems can actually do—and then covering up when they fail—threatens to poison public acceptance of the technology entirely. If people lose trust because companies like Tesla made promises they couldn’t keep, we could end up rejecting technology that might otherwise save thousands of lives.
The aviation industry figured this out decades ago. When planes crash, investigators swarm the scene, companies cooperate fully with authorities, and the entire industry learns from failures. That transparency has made flying extraordinarily safe. But Tesla’s approach in this Autopilot case shows the exact opposite mentality.
The Electrek story, based on trial transcripts from the recent case, reveals a pattern of deception that’s genuinely shocking. Here’s what Tesla did:
Within about three minutes of the fatal crash, the Model S automatically uploaded a complete “collision snapshot” (video, CAN bus streams, EDR data, and Autopilot telemetry) to Tesla’s servers, the “Mothership,” and received an acknowledgement. The vehicle then deleted its local copy, leaving Tesla as the only entity with access to the critical evidence.
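To make the mechanics concrete, here’s a rough sketch (in Python, with entirely made-up names and an imaginary endpoint; nothing here is Tesla’s actual software) of what a generic “upload, wait for an acknowledgement, then delete the local copy” flow looks like. The point of the illustration is simply that once such a flow completes, the server copy is the only copy left:

```python
# Purely illustrative sketch of an "upload, confirm, then delete locally" flow.
# All names and the endpoint are hypothetical assumptions for illustration only.
import os
import requests  # assumed HTTP client; any transport would do

TELEMETRY_ENDPOINT = "https://telemetry.example.com/collision-snapshots"  # hypothetical

def upload_then_purge(snapshot_path: str) -> bool:
    """Upload a bundled collision snapshot, and delete the local copy
    only after the server acknowledges receipt."""
    with open(snapshot_path, "rb") as f:
        response = requests.post(TELEMETRY_ENDPOINT, data=f, timeout=60)

    if response.status_code == 200:   # server acknowledged the upload
        os.remove(snapshot_path)      # local copy is gone; the server now holds the only one
        return True
    return False                      # keep the local copy if the upload failed
```

Nothing about that pattern is inherently sinister; plenty of telemetry systems work this way to save storage. The problem is what you do once you’re the only one holding the data.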
When police investigators tried to get the data, Tesla’s lawyer literally scripted their evidence request. As the homicide investigator testified:
“He said it’s not necessary. ‘Write me a letter and I’ll tell you what to put in the letter.’”
But the lawyer, McCarthy, deliberately crafted the letter to omit the collision snapshot, the bundle that includes the video, EDR, CAN bus, and Autopilot data. Instead, Tesla provided the police with infotainment data containing call logs and a copy of the Owner’s Manual, but not the actual crash telemetry from the Autopilot ECU. Tesla never mentioned that it had already had this data for more than a month at that point.
When police brought the car’s computer to a Tesla service center for help extracting data, Tesla technicians falsely claimed the data was “corrupted”—even though they had the complete dataset sitting on their servers the entire time.
For years, Tesla told courts and plaintiffs that the crucial collision data “didn’t exist.” Only when forensic experts finally gained access to the car’s computer and found metadata proving Tesla had the data all along did the company finally admit what it had done.
As Electrek reports:
The automaker had to admit to having the data all along.
During the trial, Mr. Schreiber, attorney for the plaintiffs, claimed that Tesla used the data for its own internal analysis of the crash:
“They not only had the snapshot — they used it in their own analysis. It shows Autopilot was engaged. It shows the acceleration and speed. It shows McGhee’s hands off the wheel.”
Yet Tesla gave access to neither the police nor the victim’s family, who had been trying to understand what happened to their daughter.
Just reading through the summary Electrek wrote about the timeline is horrifying and raises obvious questions about why there’s no criminal liability here:
- Tesla had the data on its servers within minutes of the crash
- When the police sought the data, Tesla redirected them toward other data
- When the police sought Tesla’s help in extracting it from the computer, Tesla falsely claimed it was “corrupted”
- Tesla invented an “auto-delete” feature that didn’t exist to try to explain why it couldn’t originally find the data in the computer
- When the plaintiffs asked for the data, Tesla said that it didn’t exist
- Tesla only admitted to the existence of the data once presented with forensic evidence that it was created and transferred to its servers.
When the collision data finally came to light, it painted a damning picture. Electrek’s summary of the forensic analysis is quite something:
- Autopilot was active
- Autosteer was controlling the vehicle
- No manual braking or steering override was detected from the driver
- There was no record of a “Take Over Immediately” alert, even as the car approached a T-intersection with a stationary vehicle in its path.
- Moore, the plaintiffs’ forensic expert, found logs showing Tesla’s systems were capable of issuing such warnings but did not do so in this case.
- Map and vision data from the ECU revealed:
  - Map data from the Autopilot ECU included a flag that the area was a “restricted Autosteer zone.”
  - Despite this, the system allowed Autopilot to remain engaged at full speed.
That last point is crucial. Tesla knew this wasn’t an appropriate place for Autopilot to operate, but the system didn’t disengage or warn the driver. The NTSB had specifically warned Tesla to “incorporate system safeguards that limit the use of automated vehicle control systems to those conditions for which they were designed.”
Tesla appeared to ignore that recommendation.
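For what it’s worth, the kind of safeguard the NTSB is describing isn’t exotic. Here’s a rough, hypothetical Python sketch (again, nothing here reflects Tesla’s actual software; the flag name and function are assumptions) of what a geofence check tied to a “restricted Autosteer zone” flag could do: warn the driver and disengage rather than stay active at full speed:

```python
# Hypothetical sketch of the safeguard the NTSB recommendation describes:
# if the map layer flags the current road segment as outside the system's
# design domain, warn the driver and hand control back instead of staying engaged.
from dataclasses import dataclass

@dataclass
class MapSegment:
    restricted_autosteer_zone: bool  # e.g., intersections or roads the system wasn't designed for

def check_operational_design_domain(segment: MapSegment, autosteer_engaged: bool) -> str:
    """Return the action a geofence safeguard would take for the current segment."""
    if autosteer_engaged and segment.restricted_autosteer_zone:
        # Alert the driver and begin a controlled disengagement rather than
        # continuing at full speed through a zone the system wasn't designed for.
        return "warn_driver_and_disengage"
    return "continue"

# Example: a segment flagged as restricted while Autosteer is active
print(check_operational_design_domain(MapSegment(restricted_autosteer_zone=True), True))
# -> warn_driver_and_disengage
```

The forensic findings above say the flag was already in the car’s own map data. The question the jury apparently weighed is why nothing like this check acted on it.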
The jury found that the driver in this case bears primary responsibility—he admitted to being distracted and not using Autopilot properly. The jury assigned him 67% of the blame. But they also found Tesla 33% responsible, and that matters.
As Electrek notes:
However, there’s also no doubt that Autopilot was active, didn’t prevent the crash despite Tesla claiming it is safer than humans, and Tesla was warned to use better geo-fencing and driver monitoring to prevent abuse of the system like that.
This case (unlike some other stories about autonomous vehicles) isn’t about punishing innovation or holding technology to impossible standards. It’s about holding companies accountable when they oversell their capabilities and then actively obstruct efforts to learn from failures.
Tesla’s behavior in this case—the years of lies, the misdirection of police, the withholding of critical evidence—represents everything wrong with how some tech companies approach safety and accountability. It’s the opposite of what we need to build public trust in autonomous vehicles.
Self-driving technology can eventually make our roads safer. But getting there requires companies that are transparent about their systems’ limitations, cooperative with safety investigations, and committed to continuous improvement based on real-world data.
Tesla’s cover-up in this case shows a company more interested in protecting its stock price (the biggest source of Elon’s wealth) than protecting lives. And Musk’s tweet claiming “Teslas can drive themselves” just days after this devastating evidence came to light shows he’s learned nothing.
If we want autonomous vehicles to fulfill their life-saving potential, we need companies that act more like airlines after a crash investigation (full transparency, immediate cooperation, system-wide improvements) and less like Tesla in this case (cover-ups, obstruction, and doubling down on dangerous claims).
The technology itself isn’t the problem. The corporate culture that prioritizes PR over safety is.
Filed Under: autonomous vehicles, autopilot, crash data, data, elon musk, liability, self-driving, transparency
Companies: tesla