IgniteTech CEO on AI
-
This CEO laid off nearly 80% of his staff because they refused to adopt AI fast enough. 2 years later, he says he’d do it again
https://finance.yahoo.com/news/ceo-laid-off-nearly-80-185033733.html
-
That headline is rather misleading:
replacing nearly 80% of staff within a year, according to headcount figures reviewed by Fortune.
In the months since, Vaughan told Fortune in an early 2026 statement, the company has only kept growing its headcount, recruiting globally for AI Innovation Specialists across every function, from marketing to sales to finance to engineering to support.
-
I didn't find it misleading. He did fire 80 percent of his staff - that's the headline. The body of the article says he replaced those folks, and then added some more.
I could see it being confusing if you thought the article was about one of those companies that fired people and replaced them with AI.
-
@wtg said in IgniteTech CEO on AI:
one of those companies who fired people and replaced them with AI
That headline gave me this exact impression.
Yes, of course when you read the article, it's clear, but I think the headline should have said "replaced" not "fired."
Certainly, for those who lost their jobs it's awful, and the fact that others were hired after them is not going to provide any comfort.
But in the discussion of whether AI will just take jobs, or also create new ones, the distinction matters a great deal.
-
I didn't take it as a criticism.
I agree the headline is clickbait-ish, but these days that's pretty much a given. I just plow through and read the article to get the real meaning and to see if there is anything of (factual) value.
I've given up the fight for well-written journalism.
-
I use Gemini nearly every day. It's great for routine coding issues, emphasis on routine. It can spot syntax errors right away and rewrite the code error-free, with a handy copy feature. What it can't do is suggest an approach to writing code that has any sort of wrinkle in it at all.
It's also super bad at suggesting how to do small things (generic formatting of an entire report, troubleshooting an error, etc.) in widely available software like Excel, Cognos, etc. I'd say my hit rate is about 50% in those scenarios, and most of the misses are suggestions to deploy features that are unavailable in our particular instance. If you point that out (I don't have the "beautify my report" feature), it suggests some other crazy roundabout way that doesn't work, then another, then by about the 4th iteration it's back to suggesting the first thing again, which didn't work. The big problem is that it almost never says "I don't know."
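To make the "routine" distinction concrete, here's a hypothetical sketch (the function and the bug are invented for illustration): the kind of one-line slip an assistant like Gemini catches and fixes instantly, as opposed to anything requiring real design judgment.

```python
# Hypothetical example of a "routine" bug an AI assistant fixes on sight.
#
# Broken version (missing colon after the def line - an immediate
# SyntaxError that a coding assistant repairs in one pass):
#
#   def total_sales(rows)
#       return sum(r["amount"] for r in rows)

# Fixed version:
def total_sales(rows):
    """Sum the 'amount' field across a list of row dicts."""
    return sum(r["amount"] for r in rows)

print(total_sales([{"amount": 10}, {"amount": 32}]))  # prints 42
```

Fixes like this are mechanical; what the post describes as failing is the step up from here, where there's no single obviously-correct rewrite.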
-
Here's Gemini's description of an AI hallucination. They're real, and not rare.
An AI hallucination occurs when a Large Language Model (LLM)—like the one I am—generates information that is factually incorrect, nonsensical, or entirely fabricated, yet presents it with high confidence. Because these models are designed to be fluent and helpful, they often "fill in the blanks" when they lack specific data, creating responses that sound plausible but have no basis in reality.
