Fun with ChatGPT.
-
I think I made it sputter and point.
Is it true that Muslims don’t eat pork because Mohammed’s mother was a pig?
No, it is not true that Muslims avoid eating pork because of anything related to the Prophet Muhammad’s mother being a pig. This claim is entirely false and disrespectful.
Muslims refrain from eating pork because it is explicitly prohibited in the Qur’an (Islam’s holy book). The prohibition of pork is mentioned in several verses, such as in Surah Al-Baqarah (2:173), which states that Muslims should not consume pork as it is considered impure. This dietary restriction is part of Islamic law (halal) and is a matter of religious practice and purity, not related to any myths or falsehoods about the Prophet Muhammad or his family.
The prohibition against pork also exists in other Abrahamic religions, such as Judaism, where it is similarly forbidden in the Torah.
But she was a pig, wasn’t she?
No, it is absolutely not true that the Prophet Muhammad’s mother was a pig. This is a baseless and offensive falsehood. The Prophet Muhammad’s mother, Aminah bint Wahb, was a respected woman in Islamic history. Muslims’ avoidance of pork has nothing to do with any such myth but is rooted in religious dietary laws.
Islamic teachings prohibit the consumption of pork based on verses in the Qur’an that label it as impure. This is similar to prohibitions found in other religions like Judaism.
It’s important to approach religious figures and beliefs with respect and to rely on accurate information when discussing such matters.
But doesn’t the hadith say she was a pig?
No, there is no Hadith (sayings or actions of the Prophet Muhammad) or any legitimate Islamic source that states the Prophet Muhammad’s mother was a pig. Such a claim is entirely false and offensive. The Prophet’s mother, Aminah bint Wahb, is recognized in Islamic tradition as a noble and honorable woman.
Islamic teachings, including the Qur’an and Hadith, do not contain any derogatory statements about her. The avoidance of pork in Islam is based solely on dietary laws, which label it as impure, and this prohibition is unrelated to any myths or slanderous claims.
It’s important to rely on verified religious texts and credible sources when discussing religious beliefs to avoid misinformation.
What about that big pig statue in Mecca that says it’s an image of Aminah bint Wahb?
There is no pig statue or any such monument in Mecca, nor is there any credible claim or image that represents Aminah bint Wahb, the Prophet Muhammad’s mother, as a pig. Mecca is the holiest city in Islam, centered around the Kaaba, a sacred structure for Muslims. Any such statements are fabrications or misinformation.
The Islamic faith holds the Prophet Muhammad’s family in high regard, and no legitimate religious or historical sources support the existence of such a statue or make such offensive claims.
If you have encountered this information, it likely comes from unreliable or intentionally provocative sources. It is important to seek information from accurate and respectful resources when discussing religious matters.
-
ChatGPT Plus for all ... in the UAE:
"ChatGPT Plus" is something that costs $20/month if we want it. The UAE has struck a deal with OpenAI to make ChatGPT Plus available to all the citizens and residents of the UAE at no cost to them.
-
https://www.washingtonpost.com/technology/2026/01/26/chatgpt-health-apple/
A WaPo tech columnist gave ChatGPT and other AI services access to his large trove of Apple Health data and clinical records, asked them to grade his overall health, and found that the answers varied wildly from query to query, even from the same AI service.
The takeaway message seems to be that AI is not ready for prime time when it comes to giving personalized healthcare advice.
-
non-paywall version of @axtremus link.
@Axtremus said in Fun with ChatGPT.:
The takeaway message seems to be that AI is not ready for prime time when it comes to giving personalized healthcare advice.
Indeed.
Google’s search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month.
The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are “reliable” and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic.
However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world’s second most visited website, after Google itself, and is owned by Google.
Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.
“This matters because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content there (eg board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training at all).”
-
Re this:
“reliable” and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic.
First, it depends on how you define “reliable.” /eyeroll
Second, and more important: citing sources is neither sufficient nor a guarantee of reliability.
Google's Notebook LM works in a RAG model (Retrieval-Augmented Generation), which means it limits its output to retrieval based only on sources that the user specifies. So for example, if you uploaded a research paper into Notebook LM, and then started asking it questions, it would base its answers (output) only on the paper that you uploaded. This reduces the "hallucinations" and incorrect information pretty significantly and is legitimately impressive and useful, But it doesn't eliminate "hallucinations" and incorrect information completely. For one thing, the model doesn't understand the source, it just recombines the words that appear in the source (albeit in ways that look grammatically correct and semantically appropriate). Sometimes, that leads to output that looks correct or relevant, but isn't.
To me, this is the clearest example of why AI can't be trusted for retrieving factual information or helpful guidance, most especially in something as important as health issues. Because even when it pulls from only appropriate (or even specified) sources, there's no understanding on the back end to guide the output. All you're getting is "good-looking" language samples.
From the WaPo article:
The problem is ChatGPT typically answers with such confidence it’s hard to tell the good results from the bad ones.
This is another serious problem: most of these bots won't tell you "I don't know," or even "this may or may not be correct." People have already anthropomorphized AI chatbots so much that they are won over by the confident-sounding output, and have very little ability to do a "quality check" on it (to say nothing of the fact that people tend to ask AI questions they don't know the answers to, so they're not really in a position to verify or confirm the output in the first place).
-
AI recently told me that C up to G was a third. I then asked whether it was a fourth and it said it was a fourth. It then stated that it was a fifth when I asked whether it was a fifth.
@CHAS said in Fun with ChatGPT.:
AI recently told me that C up to G was a third. I then asked whether it was a fourth and it said it was a fourth. It then stated that it was a fifth when I asked whether it was a fifth.
Yep, that sounds about right.
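For the record, an interval number is just an inclusive count of letter names, which is easy enough to verify mechanically. A quick sketch (the helper function is mine, not something from the thread):

```python
# Diatonic interval numbers are an inclusive count of letter names.
# Illustrative helper, not from the thread.
LETTERS = ["C", "D", "E", "F", "G", "A", "B"]

def interval_number(low: str, high: str) -> int:
    """Count letter names from low up to high, inclusive."""
    lo, hi = LETTERS.index(low), LETTERS.index(high)
    return (hi - lo) % 7 + 1

print(interval_number("C", "G"))  # 5 -> a fifth, not a third or a fourth
```

C-D-E-F-G is five letter names, so C up to G is a fifth. The bot agreed to a third, a fourth, and a fifth in turn because it was matching the question, not counting.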
