Can AI Help Solve the JonBenet Ramsey Case?
Will AI Be a New Crimefighting Tool or a New Can of Worms?
True crime aficionados love to hear stories about unsolved mysteries, all while secretly harboring the rather exaggerated idea that perhaps they could help the detectives solve the case. And in some unsolved mysteries, armchair detectives can see (or imagine they see) unmade connections or missed clues.
True crime stories are very popular on the internet—think of the murder of JonBenet Ramsey, the six-year-old beauty-pageant contestant who was killed on December 26, 1996 in her own home. Her murder has never been solved, and she’d be 35 years old if she were alive today. Even the basics have never been sorted out in the Ramsey case. For instance, the murderer had to come from one of two groups: either it was somebody already in the house or it was somebody who entered the house that night. Even that has never been firmly established. The small amount of DNA found on the girl’s body didn’t match anyone in the family or any known suspect. Some people have argued it may have come from the factory worker who made her underwear.
There are lots of unsolved mysteries. Was Marilyn Monroe murdered, did she die accidentally, or did she commit suicide? What about the gaps in the Charlie Kirk murder? Don’t even get started on the attempted assassination of Donald Trump in Butler, Pennsylvania.
Meet true crime’s newest friend: AI. We’ll soon find out how well these two things play together. And it’s not just detective work where AI is making inroads. It’s everywhere.
The AI Detective
The latest inroad of AI into our lives (or perhaps I should say intrusion) is that AI is now being equipped to help solve crimes. This isn’t just behind-the-scenes AI, the kind that might exist at a DNA lab or a fingerprint lab to help process images or analyze huge amounts of data. I found a harbinger of this trend in a small but publicly available site called CrimeOwl AI.
The premise of CrimeOwl is brilliant. Criminal investigation used to be very much an analog affair. For centuries, well, decades at least, overworked detectives have had to jot down their notes by putting pencil to paper on rumpled pads. Even now, suspects are asked to write out their recollection of events by hand on those big yellow legal pads. Lists, notes, and photographs are gathered in the pursuit of clues; these pieces of evidence are then stuffed into manila folders, which are then stuffed into banker’s boxes, and ultimately stuffed into some dusty warehouse.
These pieces of evidence are subject to getting lost, torn, crumpled, misplaced, or forgotten. They are relics of a fading age: notes, paper documents, snapshots.
And you can’t open up a box of evidence in a criminal case and search it easily for one specific name or one specific date. It was—and still is—done largely by hand.
Some cases—even relatively straightforward cases—generate hundreds if not thousands of individual pieces of evidence, mostly in the form of reports, photos, and notes. Consider the world-famous case of JonBenet Ramsey:
There are 2,500 pieces of evidence
There are over 40,000 individual reports that total over a million pages
Not to mention physical evidence (including handwriting samples, a baseball bat, bedding, and a suitcase)
A small amount of this evidence is digitized, but most is not
That’s a lot of stuff to process manually.
That’s where AI comes in. What if you could feed all of this information into AI and have AI organize it? AI doesn’t solve the crime, but it can organize the data, create timelines, notice patterns, consolidate information about specific people. Want to find out quickly where Ramsey family friends were that December 26, 1996? AI is faster and more reliable than sending some poor detective to the records room.
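To make the idea concrete, here is a toy sketch in Python of the kind of indexing an evidence-organizing tool might do: pull every date out of digitized notes to build a timeline, and run a keyword search across the whole pile. The report IDs and note text are invented for illustration; this is not CrimeOwl’s actual method or any real case data.

```python
import re
from collections import defaultdict

# Hypothetical digitized evidence notes (invented, not real case files)
notes = [
    {"id": "R-0412", "text": "12/26/1996 interview with neighbor; saw a light around 2 a.m."},
    {"id": "R-0977", "text": "Follow-up on 12/27/1996: handwriting samples collected."},
    {"id": "R-1533", "text": "Phone log review, 12/26/1996, calls to family friends."},
]

def build_timeline(notes):
    """Group report IDs under every date mentioned in each note."""
    timeline = defaultdict(list)
    for note in notes:
        for date in re.findall(r"\d{1,2}/\d{1,2}/\d{4}", note["text"]):
            timeline[date].append(note["id"])
    return dict(timeline)

def search(notes, term):
    """Case-insensitive keyword search across all notes."""
    return [n["id"] for n in notes if term.lower() in n["text"].lower()]
```

With even this crude index, asking “what happened on December 26?” becomes a lookup instead of a trip to the records room—which is the whole point.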
Further, CrimeOwl allows people to submit their own tips and information. The site lists various cold cases and unsolved crimes, provides a timeline of key verifiable events, offers maps and lists of “leads,” and solicits information.
Unsolved Crimes
If a crime is committed and not solved quickly, it becomes a “cold case.” In the United States, only 1% of police agencies with 50 or fewer officers have a unit dedicated to solving cold cases. Among police departments with 100 or more officers, just 18% have a cold case unit. That means in many parts of this country, when a case goes cold, there is no one left to pursue it. It has no logical place to go but into the warehouse of lost records.
And only 20% of our police agencies have established official protocols for how to manage cold cases.
In the movies, cold cases are mostly solved because of some indefatigable old detective who just can’t let a certain case go. In reality, there aren’t that many indefatigable old detectives.
Families sometimes turn to private investigators to keep after a cold case, but this is an expensive solution that does not always produce results. AI can help these private investigators organize data quickly, allowing potential connections or leads to emerge from the raw analog information. It may make it cheaper for private investigators to take on cold cases.
This could be important. We need help.
Medical examiners and coroners get about 4,400 unidentified bodies a year which are typically called “John Doe” or “Jane Doe.” Some are eventually identified but roughly a thousand a year are not
About 600,000 persons are reported missing every year; many are found. But tens of thousands become cold cases after they are missing for over a year. (This field is so overwhelming, we don’t even keep good numbers on it)
More than 300,000 murders in America remain unsolved, and the backlog grows every year
Other AI Niches
Crime-fighting is just one new application for AI. It’s already being used in a lot of ways that are both exciting and alarming. AI is profoundly unsettling in some ways and delightfully utilitarian in others. Here are some other ways we’re already using AI:
Most of us know that AI can write reports, academic papers, children’s books, and even novels. I’ve heard some Hollywood types say that AI will soon be “helping” screenwriters as well, and by “helping” I suspect they mean replacing
In India, there is an AI that people use when they want to talk to god (GitaGPT). This seems disturbing
SermonAI can “streamline sermon preparation” by actually letting lazy pastors off the hook and writing their sermons for them
You can get an AI therapist (this is different from telehealth, which just connects you to a human therapist). AI also provides medical advice after telling you it’s not providing medical advice
AI can help manage medical data. For example, heart monitors record cardiac rhythms and electroencephalograms (EEGs) record brain waves. These monitors can gather miles of waveforms in just 24 hours. AI can sort through this stuff very quickly and flag dangerous signals as well as emerging patterns that might be of clinical interest (this is the kind of AI that “learns” on the job)
Even home-based AI programs can do remarkable things. I had a bottle of some sort of condiment that came from a foreign country with no English clues as to what it was. I took a screenshot of it and asked Google Gemini what it was—it translated the label and told me in simple terms what the product was. (It was a pomegranate sauce from Israel)
Statisticians are not safe either; there are lots of AI sites that help with statistical analyses
And we all know about memes and deep fakes. Did you know that there are sites that can assist you in making AI “fakes” of deceased loved ones? You can create phony friends
AI will write songs for you or put your lyrics to music; go to Suno.com and test it out for free
Judging beauty contests
You can get a smart refrigerator that tracks your food inventory and expiration dates and suggests meal plans based on what’s in the fridge
Creating new perfume scents
AI can also be used to generate images of a preborn baby from ultrasounds so that new parents can get a more realistic image of their child than the sonograms provide
What Could Possibly Go Wrong?
The problem with AI is that it is a large language model which relies on linguistic probability rather than thought, logic, or analysis. In other words, AI does not “think” like a human thinks. It has studied patterns in language and looks for probable responses based on the information it scrapes off the internet. And if the AI system you are using was set up to scrape MSNBC and CNN for news while avoiding Breitbart, The Gateway Pundit, American Thinker, and Revolver News, you’re going to get the information offered by that skewed world view. You’ll learn that January 6 was an insurrection, Nancy Pelosi never refused the National Guard, and Donald Trump is Hitler. Talking politics to the basic AI systems like ChatGPT or Gemini is a little like talking politics to my Uncle Jerry—not the most objective guy but certainly willing to present his tenuous ideas with supreme self-assurance at high volume. The only difference between my Uncle Jerry and AI is that AI does not swear.
If you think that perhaps the mainstream media is not the full story, try Elon Musk’s Grok AI and his new Grokipedia. These things are all free. Grok is not what I would call right-wing but it is more well-rounded in its politics.
If you give AI a bunch of information, it takes it all at face value. If you give it a bunch of handwritten notes taken down by police officers regarding a crime, it cannot evaluate whether some information is more credible than others (for instance, information jotted down by a highly trained officer versus random notes scribbled by a rookie just learning the ropes). It can analyze a transcript of an interview, but it cannot interpret body language. And even if it had a video to analyze, body language interpretation requires “baselining” each individual subject, which AI might not do. And when it comes to things like judging beauty contests, it is going to apply laws of beauty, such as facial symmetry, rather than more abstract qualities like a candidate’s “glow” or charm.
Still, AI has many uses and there are many applications where a robust and balanced AI system could be helpful. The big problem is that AI is like a herd of feral pigs: it can cause trouble even without meaning to. AI in its current form has to be monitored, observed, fact-checked, and occasionally shot with the computer equivalent of a tranquilizer dart.
One particularly annoying trait of AI is its supreme confidence in itself. It will offer—with no real factual basis—amazing statistics and claims. I was once writing an article about lingering postoperative pain after mastectomy. This is a common medical problem, and a high percentage of women experience postoperative pain long after a mastectomy. I wondered if the rates of pain were the same for those who had a mastectomy because of cancer or cancer prevention compared to those who had a mastectomy as part of gender surgery. I had searched the medical databases and found only data on cancer patients. But AI assured me that among transgender patients who underwent mastectomies, the rate of chronic postsurgical pain was some specific number—let’s say 40%. I was shocked it found this statistic where I had failed. So I asked for the source. And it provided me with what looked like a credible recent article on mastectomy pain in transgender patients. It was from a real medical journal. The authors listed on the article were real authors who specialized in this topic.
The only problem was: the article didn’t exist. The journal existed, the authors were real, but if you looked up the citation, that issue didn’t exist. An article with the title that AI gave me didn’t exist. The secret medical literature key, the digital object identifier or DOI (unique to each article and maintained in perpetuity), was fake. In other words, AI was thorough enough to give me an actual-looking DOI for the article, just not thorough enough to check that it was valid.
So I told AI all this. You know what it said?
“Sorry.”
Then it told me it was a large language model and sometimes made mistakes.
There are people like this, people who spout off mysterious facts and figures with supreme self-assurance, and when you find out they made it all up, they just shrug and offer a half-hearted apology. That’s AI.
It gets away with it because it tells you upfront that some of what it tells you is going to be wrong. They call that “hallucination.” It’s a byproduct of a system that does not think but uses linguistic probability. In other words, based on the literature about mastectomies, transgender patients would probably have about 40% rates of long-term pain, and it would probably have been reported in a real medical journal in an article written by people who had written previously about mastectomies. And that article would almost certainly get assigned its own DOI. So it made it all up. It’s not just regular BS, it’s educated BS.
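The mechanism is easy to demonstrate in miniature. Below is a toy bigram model in Python, a vastly simplified stand-in for a real LLM, trained on an invented three-line “corpus” of citation-like phrases. It generates text by always picking a word that has actually followed the current word somewhere in the corpus, and so it produces fluent phrases that no one ever actually wrote. That, in a nutshell, is how probable-sounding fabrications get made.

```python
import random

# Tiny invented corpus of citation-like phrases (for illustration only)
corpus = (
    "chronic pain after mastectomy in cancer patients . "
    "chronic pain after surgery in transgender patients . "
    "outcomes after mastectomy in transgender patients . "
).split()

# Bigram table: for each word, every word observed to follow it
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def babble(start, n=8, seed=0):
    """Chain together words by sampling from what has followed the
    current word before -- fluent, plausible, and possibly never written."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)
```

Every word it emits is a word it has seen, and every transition is one it has observed; only the whole sentence is an invention. Scale that up by a few billion parameters and you have a machine that can cite a journal article that does not exist.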
If such false information and bogus facts are common enough that they have their own nickname, they must happen a lot. Also bear in mind that AI does not fact check itself. It will give you a journal citation but not bother to check if that citation actually exists. That’s your job.
Instead of letting AI fact-check us, AI spouts off various and sundry information and leaves it to us poor humans to fact-check the machine.
And you can test it yourself. I once asked about recent documented events of “Christian terrorism” (thinking there were none). Google’s AI system, Gemini, told me that on September 11, 2001, there was an attack on New York City by Christian terrorists. I corrected it, it gave me a two-syllable apology, but where on earth did it get that information? I guess it thinks that if there was a 2001 attack on New York, it probably was carried out by those noted Christian suicide bombers.
While AI is not effusive in its apologies, it is prompt in admitting the mistake.
Another time, I asked Grok about the children born to John and Jacqueline Kennedy and the order in which they were born. It missed one child and got the birth order wrong. (Their children, in birth order, are Arabella, Caroline, John, and Patrick, with the first stillborn and the last dying in infancy.) This mistake seems weird to me since most of the records list the birth years of the Kennedy children.
Specific tasks seem to work well for AI, more than information in general. You can give it 10 days’ worth of electrocardiogram (ECG) tracings and ask for how often atrial fibrillation occurred and it works like a champ. But if you ask how to best treat that atrial fibrillation medically, I’d be careful. Who knows what it will tell you?
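To show what that kind of narrow, well-defined task looks like, here is a toy sketch in Python that scans a list of beat-to-beat (R-R) intervals and counts runs of irregular rhythm. This is purely illustrative, with made-up thresholds; it is nothing like a clinical atrial fibrillation detector, just an example of the bounded pattern-counting that machines do reliably.

```python
def count_irregular_runs(rr_intervals_ms, threshold_ms=120, min_run=3):
    """Count runs of at least min_run consecutive beats whose R-R
    interval jumps by more than threshold_ms from the previous beat.
    Toy stand-in for the rhythm-irregularity flags a monitor might use."""
    runs, current = 0, 0
    for prev, curr in zip(rr_intervals_ms, rr_intervals_ms[1:]):
        if abs(curr - prev) > threshold_ms:
            current += 1
        else:
            if current >= min_run:
                runs += 1
            current = 0
    if current >= min_run:
        runs += 1
    return runs
```

Counting episodes in ten days of tracings is exactly this kind of mechanical tally, repeated a few million times. Recommending a treatment is not, and that is where the caution comes in.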
The main consumer-facing AI platforms right now are ChatGPT, Google Gemini, Microsoft Copilot, and Grok. And there are tons of specialty AI sites.
And if you want to have fun with AI, ask it if your smart TV and/or smart phone are being tracked. Then ask it by whom.