12k AI-generated blog posts added in a single commit

(github.com)

134 points | by noslop 5 hours ago

44 comments

  • ConceitedCode 4 hours ago
    I suspect we'll address this by just going back to older ranking algorithms for search. We'll go back to the primary signal of good content being links from trusted sources.

    People gaming the content based algorithms will eventually cause their own downfall.

    • whstl 44 minutes ago
      This has been the status quo for more than a decade.

      In the past SEO blogspam was done by cheap freelancers, and there were several agencies selling the service.

      Experts identify blogspam quite easily, but laypeople eat it up and use it as a reference in conversations and when making decisions.

      Google has known about it, has been in contact with such agencies and companies, and has been refusing to do anything about it for the longest time.

    • iuvcaw 4 hours ago
      Ironically this post is doing wonders for its page rank, as people are linking to it in the comments
      • dang 3 hours ago

          <a href="https://oneuptime.com/blog" rel="nofollow">https://oneuptime.com/blog</a>
        
        https://news.ycombinator.com/item?id=47641348

        (By coincidence, see also https://news.ycombinator.com/item?id=47641829)

      • Retr0id 4 hours ago
        Now that we have better ML, maybe we could take "link sentiment" into account too.
        • oliveroot 3 hours ago
          I think they have something better - “link rank” which essentially takes into account the quality of backlink.

          I believe it is nuanced enough to have different rank per “topic”, or “keyword” etc. but admittedly just kinda guessing from the outside.

          The last time I tried to build something like this I realized it’s useless without first having a gigantic amount of data already crawled. When I started crawling I realized I would never catch Google. I think without Wikipedia the LLMs might have taken 10 more years to surpass them.

          • cyanydeez 49 minutes ago
            Crawlers would need to use backlinks, but also rank vector similarity to ensure the linked content matches the link's intent. Some kind of rainbow of shades of how relevant the link is to the linkee, and the reverse.
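            The "link intent vs. linked content" check described above could, in the simplest case, be a similarity score between the text around a link and the target page. A toy sketch, using bag-of-words cosine similarity as a stand-in for real embeddings (all strings here are invented for illustration):

```python
# Toy sketch (not how any real crawler works): score how well a linked
# page matches the context around the link, using bag-of-words cosine
# similarity as a cheap stand-in for learned embeddings.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_relevance(anchor_context: str, target_text: str) -> float:
    return cosine(Counter(anchor_context.lower().split()),
                  Counter(target_text.lower().split()))

on_topic = link_relevance(
    "great guide to redis connection pooling in elixir",
    "redis connection pooling in elixir with redix explained")
off_topic = link_relevance(
    "great guide to redis connection pooling in elixir",
    "buy cheap watches free shipping limited offer")

assert on_topic > off_topic
```

            A real crawler would use learned embeddings and many more signals; the point is only that "does the target match the anchor context" reduces to a vector comparison.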
        • zahlman 3 hours ago
          I don't know how good it was, but sentiment analysis was definitely a thing pre-ChatGPT.
          • Retr0id 3 hours ago
            It was pretty basic though, and even a frontier LLM might struggle to infer that OP is a negative-sentiment link, without sufficient context.
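            For reference, pre-LLM sentiment analysis was often little more than lexicon counting, which is exactly why a negative-sentiment link is hard to catch without context. A minimal sketch (word lists invented for the example):

```python
# Illustrative sketch of pre-LLM, lexicon-based sentiment scoring --
# the "pretty basic" approach. Word lists are made up for the example.
POSITIVE = {"great", "excellent", "love", "useful", "helpful"}
NEGATIVE = {"spam", "slop", "terrible", "useless", "scam"}

def sentiment(text: str) -> int:
    # Positive words minus negative words; no grammar, no context.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

assert sentiment("this blog is useful and helpful") == 2
assert sentiment("12k posts of AI slop spam") == -2
# The classic failure mode: negation flips the meaning but not the score.
assert sentiment("not useful at all") == 1  # wrongly scored positive
```

            The last assertion shows why this class of model struggles with anything subtler than keyword matching.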
      • Aurornis 2 hours ago
        On most sites with user-submitted content, including Hacker News, rel=nofollow signals that links should not be used by search crawlers for authority calculations.

        You basically have to use nofollow for comments; otherwise your site becomes a big target for SEO link spam.

      • politelemon 3 hours ago
        I wonder if we ought to be flagging it then? There are already so many uninteresting AI-slop observations.
    • bakugo 3 hours ago
      > I suspect we'll address this

      Who is "we"? Definitely not Google or any other major tech company, they're all actively encouraging this.

      > trusted sources.

      What trusted sources are there that haven't yet been taken over by AI?

      • dvfjsdhgfv 2 hours ago
        > Who is "we"? Definitely not Google or any other major tech company, they're all actively encouraging this.

        Google has been pushing aggressively to replace its search results with snippets, now generated by LLMs, to avoid sending traffic to other websites. If they continue, they will lead Google Search to a tipping point where a good competitor can take the market by storm. Microsoft also believed Windows was indestructible, and now they're having a rude awakening.

        • onion2k 2 hours ago
          The fact is, what people really want from a search engine is a single perfect result that answers their query exactly. An LLM does the 'single result' bit, but it's dubious whether it's a perfect answer. Most of the time that's probably not very important, so long as the answer satisfies the searcher enough that they're happy.

          Google is trying to turn Search into that product, i.e. the single answer to a given search. They could do that now with Gemini, but the ads in the results are what make them money, and the backlash to embedding adverts in Gemini's output would drive millions of people to OpenAI overnight. They have to do it slowly. Give it 5 years, though, and search engine results pages will be a thing of the past.

          • dvfjsdhgfv 2 hours ago
            > Most of the time that's probably not very important

            Well... maybe, but what's the point of an answer if you can't trust it? For ultra-fast answers to unimportant stuff I keep a Cerebras tab open.

    • vohk 3 hours ago
      I don't have a ton of hope just yet because I think it's still an incentives problem rather than a technical one.

      I got tired of the increasing AI slop in my YouTube Music feed and switched to Deezer a few months ago. Since then, I haven't been able to spot a single AI artist. If a relatively marginal player like that can manage it, why can't Spotify or YTM? My suspicion is simply that Deezer actually tries.

      It's the same problem with Google and search. Kagi and others have demonstrated that you can produce better results with an infinitesimal fraction of the budget, and Google is still plenty competent where they care to be. This won't start to get fixed until they see a financial incentive to do so.

      • VladVladikoff 3 hours ago
        Maybe it’s that AI music isn’t being spammed as hard at ‘platform I’ve never heard of before’?
        • vohk 3 hours ago
          That's likely a factor, but Deezer reports that AI-generated music was 28% of their ingest as of last September. Being a smaller target doesn't account for all of it, nor does it explain why openly AI "artists" are not being delisted from the larger platforms, or why those platforms aren't providing ways to filter them out.

          https://newsroom-deezer.com/2025/09/28-fully-ai-generated-mu...

      • conception 3 hours ago
        Spotify would 100% rather buy/produce AI music than pay artists. They've also demonetized most of their artists, so if they can pump out AI songs that sound enough like what you already listen to, and then stop promoting the real artists, they don't have to pay anyone.
      • cyanydeez 44 minutes ago
        It's not a technical problem.

        It's a public good that we refuse to turn into a government service, for nebulous reasons.

    • ctoth 4 hours ago
      [dead]
  • eh_why_not 3 hours ago
    It's becoming much harder to determine on a daily basis what content is original, thought-out by a person, and trustworthy. Ironically, verifiably-old content is easier to trust now. Examples from recent personal experience:

    1) Some time ago I was searching for growing information about a specific and uncommonly-grown plant, and was led to a top-ranked website with long pages containing everything about it, including other plants. Surprised at how prolific the writing was, I spent more than an hour on the website, taking notes, etc. Every few paragraphs it would include an amazon affiliate link to something topical, which I thought was fair. Until I realized that the links near the bottom of the page were looking more random. Then it hit me, the website is all AI-generated, and the affiliate links themselves are also AI-chosen. And everything new I "learned" from that site was now useless because I had no way to know what was grounded in actual agricultural experience and what was hallucinated.

    2) Recently I did a youtube search for a book I had just finished reading, looking for some reviews. Came across a channel that was reading the book as new audio (i.e. not the original published audiobook). I thought it was a fan making it. The voice was beautiful, soothing, and natural with all kinds of relevant emotions correctly included. I started listening to the book again, until I noticed a consistent error in word ordering being made every few lines. Then it hit me! The channel even included one upload with a video recording of a seemingly-real person reading with that voice. Both the audio and video are AI-generated, but very hard to tell.

    3) Next to those videos, YT recommended many strange/new channels. One had the photo and the exact voice of a famous (and now very old) physicist, with tens of clickbaity titles about controversial topics in the domain. The only tell was that the voice was too vigorous and consistently energetic, while if you've listened to that physicist before, you know his cadence is slower. At first I thought maybe the channel is reading one of his books; no, the content itself was AI-generated, maybe based on his books. There was a lot of engagement, with many comments like "mind blown" and "learned so much today".

    Both #1 and #3 are harmful, because you think you're learning from a reliable source but you end up learning hallucinated nothings. #2 I didn't mind much, still enjoyed the new voice, and even preferred it over my original audible version.

    • predkambrij 2 hours ago
      I feel for you. I was looking for some wildlife events on YouTube, only to find that all of them were AI-generated, trying to get views. I can only find somewhat reliable content if I filter for content from before the AI era.
    • lconnell962 3 hours ago
      Something I've recently started seeing, maybe even an emerging #4, is AI-generated translations. You could have someone very intelligent producing well-written subject-matter expertise, or just someone with valid thoughts they wish to express to the world in a more common tongue than their own.

      Or, on the other end, you could have someone who wrote a sentence or two in their own language and had some combination of AI generation and translation bloat it out.

      In both cases you will get something that can look correct and well thought out, but will probably have at least some of the AI-slop signs present. I don't know what the solution is for this type, given claims that Google Translate has started doing this kind of translation for people. An AI translation is probably just as prone to hallucinations as any other AI output, but it will probably look more natural to readers than a direct translation.

    • anal_reactor 2 hours ago
      You're making the classic mistake of looking for a trustworthy information source and then trusting it, instead of focusing on whether the information itself is trustworthy regardless of source. It's literally the same as my grandma saying "they said so on TV, therefore it must be true" while completely dismissing anything I've read on the internet because reasons.

      If you develop the skill of judging information by its merit rather than source, you won't mind AI-generated content as long as it's helpful.

      I talk to LLMs a lot. It's fucking great. Do I take everything they say at face value? No. But neither do I take at face value things that biological intelligence outputs.

      • rcxdude 1 minute ago
        You do ultimately need to trust some sources to some degree. You can try to cross-correlate, but that depends on some level of trustworthiness in the sources you are looking at. And of course for some things it's possible to try to verify directly yourself, but this is infeasible to do for everything you depend on.
      • xboxnolifes 43 minutes ago
        Information itself cannot be trustworthy. It can be right, it can be wrong, or it can be somewhere in between. Only a source can have trustworthiness, as it's a mixed measure of reputation and provable accuracy.

        You filter out known untrustworthy sources to not waste your time verifying false information 100x more than you need to. I know The Onion is a satire publication. I do not need to verify its claims. It's an intentionally untrustworthy source. I know that LLMs can hallucinate information, so I verify with a more trustworthy source. I cross-reference things random people say on the internet, because random people on the internet are not, individually, trustworthy sources of information.

        If a rocket engineer explains to me why Rocket A isn't flight ready, I'm more inclined to believe them than if a random commenter on the internet explains it to me. Because the one source is more trustworthy than another, and if I wanted to verify the claim myself I'd have to spend a lot of time studying rocket science.

      • predkambrij 2 hours ago
        Well, if it's not disclosed, you could assume that somebody did due diligence for you and could include sources. I don't even trust an LLM when all the information is included in the context window, if I need reliable information. Trying to make money on slop is really bad manners. It's a scam; you can't call it otherwise. Btw, I like AI, it has created a ton of value for me. We just need to find a way to live with it without drowning in misinformation.
      • eh_why_not 2 hours ago
        No, it's not the same as your grandma. The point is that it's now more expensive to find correct information to learn from. You don't know it's an LLM ahead of time, and you may spend hours before you figure out something is off. Hence reputable sources will become more valuable.

        > If you develop the skill of judging information by its merit rather than source..

        Did you read example #1? I'm not talking about some piece of code from an LLM that you can verify or some political opinion that you can take with a grain of salt, but information that you can only gain and/or judge through expertise:

        If you're not a physicist yourself, you can't judge "information by its merit" on specific physics topics, because you don't have a solid baseline.

        Similarly, in growing plants, each plant has its own peculiarities, and only people experienced in growing it can tell you anything useful - it's knowledge accumulated by trial and error. Not knowledge that your "great discerning mind" can assess on its own. Even a botanist can't tell you the ideal growing conditions of a plant that they've never studied before.

        • anal_reactor 43 minutes ago
          What if your physics book is wrong because knowledge has advanced since it was released? You can still find lots of publications and people with degrees blissfully unaware of Hawking radiation. What if your botanical book is wrong because the facts have changed? The climate is changing, and so is the flora. What if your book is wrong because it's state-funded propaganda mixed with the petty fights of a bunch of people with suits and strong opinions disguised as academia? A huge chunk of linguistics is dealing with exactly this issue.

          Again, you seem to miss the point that questioning new information, which was already useful for navigating life before LLMs, before television, before newspapers, before print, before clay tablets, even before speech itself, is just as applicable to LLMs as to any other form of communication. You just need to upgrade your strategies a little, and that's it. Don't blow this out of proportion: "somebody gasp lied to me on the internet!"

  • fn-mote 4 hours ago
    I thought somebody counted them… incredibly, the log message admits to committing 12,000 articles.

    I guess that means the log message was authored by AI as well. Figures.

    • shevy-java 4 hours ago
      I am kind of upset at GitHub that we cannot easily block AI content coming from their site.
      • nickvec 2 hours ago
        It’s simply not possible to enforce at scale. How can you definitively say whether something is AI or not?
  • jpdb 3 hours ago
    I've been seeing this company in ~all of my searches across various tech topics.

    They're absolutely dominating search results. The quality isn't terrible, but there's so much content that I can't trust them to be accurate.

  • arcza 4 hours ago
    So whatever OneUptime is, I now know it has zero integrity and is something I should avoid.
  • raincole 4 hours ago
    Serious question: What is this post about and why should we care? It's a repo with 35 stars. Is adding 12,000 posts in a single commit somehow technically difficult or significant?
    • bakugo 3 hours ago
      You should care because this website has a high ranking on Google and these 12000 posts will show up every time you search something programming related.
      • conception 3 hours ago
        Stop using Google. Kagi lets you block and prioritize sites.
        • radicality 3 hours ago
          About to go do that on Kagi for the linked site. Oh and also hit the “Report this site as AI generated”
        • bakugo 3 hours ago
          I have used Kagi, it's not a suitable replacement. It still struggles with relevance even compared to the garbage that is current-day Google, and is particularly bad at finding recent (less than a month or two old) information.
    • self_awareness 3 hours ago
      This post was submitted because it's so easy, and to show that it's being done in real life. That we can't have nice things because of mindless people like Nawaz Dhandala.
      • raincole 3 hours ago
        I'm quite sure that with every passing second people are pumping more AI slop onto the internet. I just don't see why this one is special (unless it's a well-known project among HN users that I'm not aware of).
        • self_awareness 3 hours ago
          I'm also quite sure, but this is the proof, not hypothesis -- with git commits and all.
  • petterroea 3 hours ago
    This is why I never trust blog posts any more. If a company logo is attached, it's just SEO garbage.
  • ThrowawayR2 4 hours ago
    If the dead Internet theory wasn't true before, it sure will be soon.
    • post-it 4 hours ago
      It's kinda exciting. The social media status quo has its upsides but a lot of downsides. I'm hopeful that the change will be good. We'll have to figure out a way to authenticate the people we're talking to, which will encourage tighter-knit communities.
      • dataviz1000 3 hours ago
        This will end with the only way to authenticate the people we're talking to being to meet them at the coffee shop in the morning.
        • post-it 3 hours ago
          That might be okay. We'd lose a lot, obviously, but if you could 100% trust that the person you met at a coffee shop is real, and you could 99% trust that the person they met the day before is real, and you could 98% trust the person that person met is real, you've got three degrees of Kevin Bacon.
          • abathur 3 hours ago
            But can you trust that the things they say aren't just laundered AI blogspam?
            • post-it 3 hours ago
              Well I trust that the things my friends say aren't laundered AI blogspam. And if they trust the things their friends say, I can likely trust that too.
        • arctic-true 3 hours ago
          Until the humanoid robots gain the ability to process caffeine, then we’re all hosed.
        • Tepix 3 hours ago
          Did you forget about Blade Runner?
    • agilob 4 hours ago
      Dead Internet is a product now, why aren't you monetizing it yet?
    • MattGaiser 4 hours ago
      I would argue SEO should already be considered dead internet theory. Most of it is not intended to do anything but convince Google.

      A dentist buying freelance articles from a guy off Upwork is not intending to communicate any more than this guy generating articles is.

      • shevy-java 4 hours ago
        SEO also showed that Google abuses its market position. One wonders why the USA promotes a de-facto monopoly here.
    • shevy-java 4 hours ago
      Only if we allow it to happen. It is time for the Empire of common man and woman to strike back against AI slop and companies that promote it - such as microslop.
      • senordevnyc 2 hours ago
        Common man and woman don’t care that much.
    • pilsetnieks 4 hours ago
      Great point! At this point the Dead Internet Theory isn't a conspiracy – it's a roadmap. It's worth noting the distinction between "authentic" and "synthetic" online spaces is eroding faster than most people realize – that's a genuinely important conversation to be have.

      /s

      • IsTom 2 hours ago
        > to be have.

        Meatbag spotted, get 'im boys.

      • nubg 4 hours ago
        Great imitation
        • pilsetnieks 4 hours ago
          The dead internet theory terrifies me. I don't think we're at the point where it's mostly dead, but we're already way past the point where any discussion worth anything can be had on the internet itself. The problem is not that everything could be AI slop but that anything could. It simply takes the wind out of the sails and makes one question what's even the point, if anything could just be written by a clanker. Anything you write could just be screaming out into the void, affecting no one, and maybe adding to the training corpus for the next generation of clankers.

          Just writing this made me question "what's the point" several times. If you or anyone replies cogently, I still won't have any idea if it's a person or a Chinese room.

          • shevy-java 4 hours ago
            > The dead internet theory terrifies me. I don't think we're at the point where it's mostly dead

            Well - I would say the internet is not totally dead yet, but we're approaching the point of it being useless. I remember the 1990s and early 2000s - it was almost innocent compared to the total slop era we have now. Young people today don't even know that Google Search was useful at one point in time. If you use Google Search now, you get so much irrelevant crap that it's practically useless.

            • thadt 3 hours ago
              Ironically, the reason I used Google the most back then was that it indexed Usenet, while so much of the Internet offered by the other engines was "slop". My, how the turn tables.
  • CrzyLngPwd 3 hours ago
    There doesn't seem to be a workable plan for how to cope with the onslaught of AI output, and it's going to get much worse.

    The sentinel servers, Meta/Google/MS/etc., just seem to be largely ignoring it, or even supporting it.

    It's already nauseatingly common on all major platforms.

  • hirako2000 4 hours ago
    > All content must be original and not published anywhere else.

    Do what I say, not what I do.

  • TrackerFF 3 hours ago
    I've seen an increase in this "firehose" tactic among the passive-income folks, where the idea is to just saturate certain niches with AI-generated content, and collect some cents here and some cents there - in the hopes it will generate as much money as maintaining a single high-quality content channel.

    I don't know if they actually make any money doing it like that. A couple of weeks ago I stumbled across a content creator who said he had hundreds of faceless YouTube channels, made possible by AI tools.

    • iLoveOncall 3 hours ago
      My son and his friend made a YouTube channel that's just brainrot memes which, while they make them manually, could easily be fully automated with AI (or even without AI).

      They have 17 million views in 2 months.

      The strategy of spamming trash no-effort content definitely pays.

    • swores 3 hours ago
      For just $199, I'll sell you my PDF explaining exactly how to do this well enough to make WAY more than only "some cents here and some cents there". Special limited time offer for HN readers, reduced from my normal price of $1,489!

      P.S. Or get it free when buying my $499 "how to make money selling people how to make money guides" guide!

      (/s. I generally think HN comments should avoid jokes unless they're genuinely really cleverly funny, which this comment isn't - I only justified it to myself by the fact that the sort of people selling these trashy guides are the same people doing what you're talking about, and I feel they deserve mockery and shaming.)

  • wartywhoa23 4 hours ago
    AI is the stellar moment for all mediocrity and conmen.
  • chloeburbank 3 hours ago
    I visited a blog post on this site while searching for something. Suffice it to say, it was a very shoddy attempt at a blog, and at this point I should just network-block the site entirely.
  • miyuru 4 hours ago
    The commit author is here, and has only posted slop here as well.

    https://news.ycombinator.com/submitted?id=ndhandala

    Wonder when he will submit them here.

    • progbits 3 hours ago
      I think that account should be banned. Going further, the whole oneuptime.com domain should probably be blacklisted.
  • avian 3 hours ago
    Just this morning I opened up my RSS reader and found it flooded with weird, twisty prose exalting the virtues of online gambling. Since I follow a few blogs that post long-form content, I first thought this was satire or something, but after reading for a bit and seeing that the posts just never end, my best guess is that it's AI slop intended to drive traffic to some gambling site - not clear which, since there were no links. All the posts came from the RSS feed of an apparently abandoned tech blog I was following, whose last legit post was in 2020. My guess is the domain expired, a squatter bought it, saw a bunch of requests for the RSS feed, and grabbed the opportunity. Although to what end, I'm not sure.
    • camdenreslink 2 hours ago
      For every sign-up to that gambling site through their affiliate link, they make a few bucks (sometimes quite a few bucks).
      • fragmede 2 hours ago
        > not clear which since there were not links.

        How does that work tho?

  • StrLght 4 hours ago
    I am so glad DuckDuckGo allows blocking specific sites from the search. Just did this for a domain linked in this repository.
    • konradx 4 hours ago
      So now you don't get any hits from "Hacker News" ? :-)
  • Steppphennn 3 hours ago
    I don’t see how the author isn’t embarrassed. Maybe it’s just me having imposter syndrome, or maybe I can self-reflect. If he used AI to slop all those articles up, doesn’t he know any developer can use AI to get that content through the IDE? He’s trying to game something with a tool that effectively killed off that game in the first place.
    • ThrowawayR2 3 hours ago
      Getting a check for advertising revenue overcomes all sorts of embarrassment.
    • moomoo11 3 hours ago
      I’m a south asian guy so I’ll just say it. I’m not surprised anymore when a lot of scammy/scummy behavior turns out to be done by a south asian.

      In sf too most of the scammers and scummy founders are south asian.

      It’s gross and honestly as a south asian doing something legit it sucks to see them just fulfilling a stereotype.

      These assholes are the same types responsible for why those societies are fucked up, being in SF most south asians I’ve met are from super wealthy families there that exploit people. Not surprising their new generation is exploiting too.

      Downvote me if that upsets you but someone’s gotta call it out.

  • setnone 3 hours ago
    i guess 11K won't do it and 13K is just way too much
  • MattGaiser 4 hours ago
    One of the issues is that the purpose of business internet writing is not to be read, but to be ranked well.
    • thm 4 hours ago
      By now, Google is smart enough to not even index this garbage.
      • AndroTux 2 hours ago
        I wish that were true.
      • henry2023 3 hours ago
        Search "Redix for Redis connections in Elixir". This Blog slop is second result.

        Google encourages this.

    • bakugo 4 hours ago
      I think the bigger issue is that the percentage of internet writing that can be classified as "business writing" is growing significantly, now that the effort required to produce it is literally zero.

      Overall, it feels like no matter where you go on the internet, it's impossible to dodge content that exists primarily for the purpose of extracting money from the reader in some way. SEO spam blogs, AI startups shilling their latest product, AI generated stories posted to reddit that casually slip in a mention of how the supposed author has recently won money on a gambling website. It's all the same thing, really.

  • srhyne 3 hours ago
    I’ve naturally landed on a handful of their posts recently through search. I was impressed with the quality.

    Interesting to see this after the fact.

  • whycombinetor 3 hours ago
    If it's a choice between a human and an AI copywriting SEO slop, I'm happy to see an AI take that job. SEO content marketing is so painful to read once you realize you're reading it, and I have to imagine it's just as painful to write if you're a technically talented writer.
    • swores 3 hours ago
      I agree with you about the majority of "SEO content marketing", but a small minority of it is done by companies who genuinely care about doing good content, that doesn't only act as lazy SEO benefit but also as good marketing for people who read it.

      It's a lot harder / more expensive to produce, as it needs (at least before AI, and I guess still to some extent even using AI for now) to be written by someone on the team who genuinely understands the company's technology/product/whatever well enough to educate other people about it in an interesting way, rather than it being written by low wage SEO writers who just need a list of keywords to include in the drivel that is the sort of content you're talking about. So it makes sense that most companies go with the cheap option, but it's always nice to come across ones who produce actual interesting articles.

      (It's what I've always opted for when I've overseen marketing budgets, and I think the ROI is usually worth it since balancing the extra cost is the fact that the benefits go from just SEO, to SEO + word of mouth of people sharing the interesting article they read, and the awareness of the brand that comes with it. So I recommend anyone who normally chooses lazy, low quality content for SEO to consider the upgrade!)

  • alin23 2 hours ago
    They even have a scrolling 5-star reviews section, clearly generated: https://oneuptime.com/#reviews-title

    https://github.com/OneUptime/oneuptime/commit/538e40c4ae724e...

    https://github.com/OneUptime/oneuptime/commit/2bc585df20e6bb...

    You can fabricate a professional business image in a few days with AI now. It's going to be hard to build an honest brand when everyone is going to point and say "vibe coded slop" because of examples like this website.

    I'm already seeing such comments whenever someone posts an app on /r/macapps, and it's really discouraging for beginners. If I had met that resistance and that amount of mean comments when I launched Lunar, I would probably never have put in that amount of effort.

    • noslop 2 hours ago
      "This enhancement improves the user experience by showcasing positive feedback from customers"

      you can't make this up

  • ieie3366 4 hours ago
    Ironically due to slop I feel like we are regressing as a civilization

    2020, want to know how to use Redix for Redis connections in Elixir? Google it and the results were most likely high quality, written by senior engineers who knew what they were doing

    Today google that, and it will be endless amounts of slop

    • elcapitan 1 hour ago
      For some searches I've started to limit the date range to pre-2023. That drastically improves search results (DDG, but I imagine Google as well). As long as you're looking for more long-term information/posts, ofc.
    • bakugo 3 hours ago
      > Redix for Redis connections in Elixir

      I googled this exact sentence, and the third result was a link to the blog this post is about.

      Grim.

      • progbits 3 hours ago
        I'm guessing there is a bit of a feedback loop now since people try this, search for the slopsite and click it, boosting it higher. For me it was top result (in incognito, not personalized).

        Two things you can do:

        - Navigate back and open another link. This signal is used to downrank the site for the given query (Google assumes it did not provide a satisfying answer).

        - Explicitly provide result feedback. Unfortunately there isn't a category for "this is slop" but "inaccurate" works.

    • johnbarron 4 hours ago
      >> Ironically due to slop I feel like we are regressing as a civilization

      Well, after 50 years we can't reproduce what Apollo did, and I doubt current students of the same age could handle a 1912 Eighth Grade Examination: https://www.bullittcountyhistory.com/bchistory/schoolexam191...

      • post-it 3 hours ago
        Survivor bias.

        - Apollo had a significantly higher accepted risk. Apollo 1 or 13 would be untenable today.

        - The percentage of 13-year-olds who made it into and through eighth grade was significantly smaller in 1912. Your average poor farming kid did not go to eighth grade.

  • gib444 4 hours ago
    "Showing 1 - 25 of 45488 posts"

    I miss the days when we could assume that's just a pagination code bug

    • vova_hn2 4 hours ago
      It's "Showing 1 - 25 of 58891 posts" now. HN tells me your comment was posted 6 minutes ago, which gives a rate of approximately 37.23 posts/second.
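A quick sanity check on that rate, sketched in Python (the two counts and the six-minute gap are the figures quoted in this thread):

```python
# Two snapshots of the blog's "Showing 1 - 25 of N posts" counter,
# taken roughly six minutes apart (numbers quoted above).
first_count = 45488
later_count = 58891
elapsed_seconds = 6 * 60

# New posts divided by elapsed time gives the posting rate.
rate = (later_count - first_count) / elapsed_seconds
print(f"{rate:.2f} posts/second")  # → 37.23 posts/second
```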
  • nelsonfigueroa 2 hours ago
    Well, at least they're not exactly hiding it.
  • Topfi 3 hours ago
    I know there is a lot of valid criticism of GitHub's poor performance when scrolling, but in this case I think we can let them off the hook.

    I'll just leave this here: https://developers.google.com/search/help/report-quality-iss...

  • schmookeeg 3 hours ago
    We are all quickly becoming allergic to AI writing.

    To fool us into thinking writing is not AI-generated, people will create "human-ifying" filters for LLM output. These will introduce the common keystroke, grammar, and spelling mistakes that surely no automation would ever produce on its own.

    Soon the writing most vaunted and trusted will be the writing that appears written by a 4 year old with a crayon.

    Sigh.

  • WJW 4 hours ago
    Github only reports 5012 changed files though.
  • r_lee 4 hours ago
    I've seen this blog slop on Google for the last month or so, with no action taken whatsoever. It's mostly bullshit or info regurgitated from the docs.

    Google and their Search team really don't seem to care at all. All of a sudden a random blog just happens to rank on the first page for every topic.

    • tempest_ 4 hours ago
      Google is not incentivized to show you good results. You don't pay them, advertisers do and that is who they are working for.

      Their job is to provide you just enough "results" that you don't, or can't, go anywhere else.

      No more, no less.

    • gibsonsmog 4 hours ago
      Louis Rossmann recently posted a video (https://www.youtube.com/watch?v=II2QF9JwtLc) where he had Gemini replace his 10+ years of carefully curated content with AI slop, and he instantly shot (back) up to the top of the rankings. They're very clearly favoring their own generic generated content over organic, well-written, well-informed entries. Shame.
      • emsign 4 hours ago
        There are AI features and tips in YouTube's Creator Studio; they're encouraging creators to use AI tools. It makes sense that they then also reward videos that use them. That's how these platforms nudge people toward the products and behavior they want to bring to market.
    • masfuerte 3 hours ago
      Google used to prioritise search quality. About six years ago they decided to enshittify. Slop with more adverts is promoted over quality with fewer adverts. This isn't speculation. It came out in emails released as part of antitrust discovery.

      To reiterate: Google search is shit now because they want it to be.

    • fg137 3 hours ago
      Sounds like a good argument for using Kagi.
  • sigmonsays 4 hours ago
    When AI accidentally starts training itself on AI-generated content, we all lose...
  • username223 4 hours ago
    [GitHub] platform activity is surging. — https://twitter.com/kdaigle/status/2040164759836778878
  • tadfisher 4 hours ago
    > Showing 1-25 of 58891 posts

    I have to imagine that one quality post worth reading would be linked in multiple places, thus would beat tens of thousands of slop articles for SEO purposes?

    • Retr0id 4 hours ago
      You'd think, but very low quality AI-generated content regularly makes it to the HN front page, so it's just a numbers game.
    • sgbeal 4 hours ago
      > I have to imagine that one quality post worth reading ...

      As the Berliners say:

      "Die Hoffnung stirbt zuletzt"

      or:

      "Hope is the last thing to die" (or "hope dies last" if one prefers a literal translation)

  • hoppp 3 hours ago
    What is this monstrosity, cmon.

    Why would anyone read AI-generated blog posts when I can just ask an AI for what I need already?

    Even for gaming SEO this is bad: no backlinks.

  • troupo 3 hours ago
    Ironic, considering the README:

    --- start quote ---

    These blog posts are written by the OneUptime team and open source contributors. We write about our experiences, our learnings, and our thoughts on the world of software development, Kubernetes, Ceph, SRE, DevOps, Cloud and more. We hope you find our posts helpful and insightful.

    --- end quote ---

  • nunez 2 hours ago
    Welcome to the slop age!
  • nicbvs 2 hours ago
    Trying to hide all their CVEs behind AI slop
  • cebert 4 hours ago
    What is the point of this?
  • antiloper 4 hours ago
    "Nawaz Dhandala"
    • self_awareness 3 hours ago
      I nominate Nawaz Dhandala as "the king of AI slop"
      • sph 2 hours ago
        He's just an idiot for doing it in public; there are people who have been generating hundreds of posts a day for years now without committing them to GitHub under their real names.
  • ugiox 4 hours ago
    Now we know why GitHub has a hard time with stability and reliability. Because of this AI slop BS inflicted on us by the Silicon Valley tech bros and all their followers.
  • socialvideogen 2 hours ago
    [dead]
  • LorenzoBloedow 4 hours ago
    [dead]
  • cachius 4 hours ago
    At which URL(s) are the blog posts visible?
    • diehunde 4 hours ago
      • amarant 3 hours ago
        I like the topics! "Grpc-native microservices" is a wonderful piece of nonsense!
      • djoldman 4 hours ago
        "Showing 1 - 25 of 58891 posts"

        Seems to check out.

        • progbits 2 hours ago
          It's actually hilariously bad.

          If you go to the last (2356th!) page, you'll see eight posts from 2023 and 2024, mostly a few months apart. (Though even those aren't good.)

          Then in 2025 @nawazdhandala starts going wild with 22 articles on January 6th. And from that point on it's basically just all him and it keeps accelerating.

    • wofo 3 hours ago
      It would be better not to post URLs to the blog, to avoid giving it more links and pushing it even higher in search results...
    • maxbond 4 hours ago
      https://oneuptime.com/blog

      Scroll down a little and you'll see a huge block of posts dated March 31st