billyhoffman 13 hours ago

Common Crawl is shown in their screenshot of "Providers" alongside OpenAI and Anthropic. The challenge is that Common Crawl is used for a lot of things that are not AI training. For example, it's a major source of content for the Wayback Machine.

In fact, that's the entire point of the Common Crawl project. Instead of dozens of companies writing and running their own (poorly designed) crawlers and hitting everyone's sites, Common Crawl runs once and exposes the data in industry standard formats like WARC for other consumers. Their crawler is quite well behaved (exponential backoff, obeys Crawl-Delay, uses sitemaps to know when to revisit, follows robots.txt, etc.).
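
For a sense of what "well behaved" means in practice, here's a rough sketch of that kind of politeness logic (purely illustrative, not Common Crawl's actual code; it only uses Python's standard urllib modules, and the site and user-agent names are made up):

    import time
    import urllib.error
    import urllib.request
    import urllib.robotparser

    SITE = "https://example.com"        # hypothetical site
    AGENT = "ExampleCrawler/1.0"        # hypothetical user agent string

    # Parse robots.txt before requesting anything else
    rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
    rp.read()

    delay = rp.crawl_delay(AGENT) or 5  # obey Crawl-Delay, with a conservative default
    sitemaps = rp.site_maps() or []     # sitemaps listed in robots.txt say what to (re)visit

    def polite_fetch(url, max_retries=5):
        """Fetch one URL, honoring robots.txt and backing off exponentially on errors."""
        if not rp.can_fetch(AGENT, url):
            return None                 # disallowed by robots.txt
        backoff = delay
        for _ in range(max_retries):
            try:
                req = urllib.request.Request(url, headers={"User-Agent": AGENT})
                with urllib.request.urlopen(req) as resp:
                    time.sleep(delay)   # wait out the crawl delay between requests
                    return resp.read()
            except urllib.error.HTTPError as e:
                if e.code in (429, 503):
                    time.sleep(backoff) # server overloaded or rate limiting: back off
                    backoff *= 2
                else:
                    raise
        return None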

There are significant knock-on effects if Cloudflare starts (literally) gatekeeping content. This feels like a step down the path to a world where the majority of websites use sophisticated security products that gatekeep access to those who pay and those who don't, and that applies whether they are bots or people.

  • Aachen 12 hours ago

    > gatekeep access to those who pay and those who don't, and that applies whether they are bots or people.

    I'm already constantly being classified as bot. Just today:

    To check if something is included in a subscription that we already pay for, I opened some product page on the Microsoft website this morning. Full-page error: "We are currently experiencing high demand. Please try again later." It's static content but it's not available to me. Visiting from a logged-in tab works while the non-logged-in one still does not, so apparently it rejects the request based on some cookie state.

    Just now I was trying to book a hotel room for a conference in Grenoble. Looking in the browser dev tools, it seems that Visa is trying to run some bot detection (the payment provider redirects to their site for the verification code, but Visa automatically redirects me back with an error status) and refuses to let me pay. There are no other payment methods. Using Google Chrome works, but Firefox with uBlock Origin (a very niche setup, I'll admit) shuts you out of this part of the internet.

    Visiting various US sites will result in a Cloudflare captcha to "prove I'm human". For the time being, it's less of a waste of time to go back and click a different search result, but this never used to happen and now it's a daily occurrence...

    • theyeenzbeanz 11 hours ago

      Lately I’ve been noticing captchas getting more difficult by the day on Firefox. Checking the box used to go through without issue, but now it’s started popping up challenges with the boxes that fade after clicking. Just like your experience, Chrome has no hiccups on the same machine.

      • Aachen 11 hours ago

        Those "keep clicking until we stop fading in more results" challenges mean they're fairly confident you're a bot and this is the highest difficulty level to prove your lack of guilt. I get these only when using a browser that isn't already full of advertising cookies (edit: which, to be clear, I hope is still considered an acceptable state to have your browser in)

        • diggan 10 hours ago

          > Those "keep clicking until we stop fading in more results" challenges mean they're fairly confident you're a bot

          Those ones are the fucking worst. I've noticed that if I try to succeed in these captchas too quickly, it'll just say "Sorry, try again" even when every click was correct, so instead, I've started going in slow motion and faking "misclicking" which makes it much more likely to accept me as human.

          I cannot stand the idea that I have to pretend to be slower than I am, in order for a computer to not think I'm a computer. Thanks CloudFlare and Google.

          • klyrs 9 hours ago

            I always spoil as many of these as possible. Sometimes it takes me a while to prove that I'm human, but I'm dead-set on convincing it that I'm a stupid human. Of course, I fantasize that some day a robo-car will crash because I taught it that there's really no difference between a motorcycle and a flight of stairs.

            • ForOldHack 7 hours ago

              I was waiting for the day that two SUVs would hit each other, and it happened.

              Now I am waiting for two self driving cars to hit each other... they already drive like "American idiots", guess we know what the training model is.

            • dylan604 9 hours ago

              You'll just be lower on the list the AI makes of people that would be a threat.

              • WaxProlix 8 hours ago

                I love this idea, some sort of inverse Roko's Basilisk. Tie a bunch of low-IQ data points to the sources a super AI is likely to first use to identify threats so as to eke out a few more days of existence.

          • selcuka 2 hours ago

            > I cannot stand the idea that I have to pretend to be slower than I am, in order for a computer to not think I'm a computer.

            It is not only about detecting if you are a computer or not. They intentionally waste your time (regardless of whether you are a human or computer) to make it unfeasible to scrape millions of pages. The actual "detection" part is relatively less important.

          • mqus 9 hours ago

            As soon as I notice that I got this slow-fade-captcha, I will intentionally click all the wrong fields until I get a reasonable captcha. Not sure this makes a difference but it kinda works

          • jkestner 9 hours ago

            Harrison Bergeron but for AI

        • LegionMammal978 11 hours ago

          FWIW, it can't be cookies alone that gives you an inordinate number of bot challenges. I use private tabs on Firefox (for Linux and Android) for most of my browsing, and I rarely get any challenges regardless of what I do. The only issues tend to be when I make repeated searches for things with "quotes" and whatnot on Google or on Stack Exchange sites. But for the most part, those challenges aren't particularly drawn-out: I've only ever gotten the "fading" ones when I'm using Tor or a VPN.

          • Aachen 10 hours ago

            It varies a lot based on what I'm doing. Sites that rely on ads, like English-language¹ recipes or health information, have a lot of "you're European so you're blocked altogether" or "let me check that the connection is secure, ah wait, here is a captcha for you to solve" pages. Anything that needs to do fraud detection usually hates me as well, perhaps because I have a phone number and bank account from a different country than the one I live in, or perhaps because I often navigate pages differently than most people (keyboard navigation); who knows what makes these black boxes trigger. That German ISPs have daily-rotating IP addresses, so there is absolutely nothing tying a previous request to the current request, may also be a factor

            All in all, I'm someone who would benefit from a society not run by algorithms, where I can just pay up front for my use (no credit mechanisms, no fraud detection, no tracking ads), at least as an available option

            ¹ it's the language I think in the most and has many more resources than the local languages I speak

        • shadowgovt 9 hours ago

          It's acceptable, but suspicious. Two standard deviations away from the median browser (and a lot more like the configuration of a scraper, which would get reloaded in some Docker instance frequently with a fresh empty cookie jar because storing data costs infrastructure).

          • ForOldHack 7 hours ago

            You mean Edge? Chrome stands at 65.2% (1 deviation), Safari at 18.57% (2 deviations), so Edge at 5.4%, Firefox, Opera, Samsung Internet, UC Browser, Android, QQ and the others are all ... deviants?

            https://gs.statcounter.com/browser-market-share

            I use Firefox nightly which does not even show up statistically...

            • shadowgovt 7 hours ago

              Not sure if they're using user agent. Probably not because it's so easy to forge UA.

              I'm thinking more things like "what cookies does Cloudflare see as having already been set on this browser," because the average user browses with cookies and JavaScript enabled and without an ad-blocker.

        • ajsnigrutin 11 hours ago

          Aw man, you haven't seen the 'captchas' of arkose labs yet... those are a pain (twitter used to have them some time ago).

          • Aachen 11 hours ago

            Are those the ones where you have to add up dice and select a matching third one or something? The ones GitHub used for registration, say, ~9 months ago?

            You're right! I forgot about those. A colleague and I tried to complete it independently but literally could not. One run would take multiple minutes and on the second try I was more diligent (taking even longer) and certain I did all the math correctly, but registration was still being rejected. Our new colleague did not sign up for GitHub that day and got the repository from a colleague who already had access instead

            Edit: seems that's yet another one. Arkose <https://www.arkoselabs.com/arkose-matchkey/> is the one OpenAI used to use on their login page until ~2 months ago; I found it very reasonable (3x selecting the direction an object is facing), even if unnecessary since I provided the right username and password from a clean IP address on the first try

            • ComputerGuru an hour ago

              FYI, the OpenAI challenge isn’t there to protect against hackers trying to steal/brute-force logins in this case, but rather to stop bots from using the all-you-can-eat (albeit rate-limited) plans to supplant their more expensive API offerings.

      • Terr_ 7 hours ago

        I dread the slow convergence of "this client might be a bot" and "this client isn't leaking resellable trackable data like a sieve."

      • gruez 9 hours ago

        Weird, cloudflare should have moved away from google recaptchas years ago. Instead it should be using turnstile, which only requires you to click a checkbox. The only site I know of that still uses google recaptcha is archive.today, whose captcha page looks very close to cloudflare's old one.

        • eastdakota 5 hours ago

          We don't use ReCaptcha and haven't for many years. If it looks like a Cloudflare page but it has ReCaptcha on it, it's a fake.

      • influx 11 hours ago

        I wonder how many of those captchas are controlled by competitors of Firefox?

        • quasse 10 hours ago

          ReCAPTCHA absolutely hammers Firefox compared to Chrome for me. On sites that use it for login I rarely just get the "check the box" challenge anymore, and am instead being asked to train their CV algorithms by picking 5+ images of stoplights or motorcycles. Punishment for avoiding the Chrome universe I guess.

      • IX-103 3 hours ago

        Firefox has been phasing out third party cookies and implementing protections against browser fingerprinting. Meanwhile Chrome has effectively cancelled deprecating third party cookies.

        It's no surprise that, if you use a browser that makes everyone look identical and thus indistinguishable from a bot, you have to solve more captchas. Welcome to the private web future you've always asked for...

    • rmbyrro 7 hours ago

      If you use Linux, the experience is terrible nowadays.

      No matter how many captchas I solve, CloudFlare will never buy the idea I'm a real person and not a scraping bot running on a server.

      I wonder if this kind of discrimination is even legal...

      • koito17 7 hours ago

        Despite my using macOS, Cloudflare Turnstile is nothing but an infinite loop of "verification". I am using Firefox with basic privacy protections enabled. At this point, I prefer staying classified as a bot over accessing pages with Cloudflare Turnstile enabled.

        Before the infinite loops from Cloudflare, I had noticed that Google's captcha on Firefox would frequently reject audio challenges and require a lot more work than on other browsers.

        • rmbyrro 6 hours ago

          Same. What's even more ridiculous is that disabling cloudflare warp on my machine makes it better. Cloudflare doesn't even trust Cloudflare.

    • esperent 11 hours ago

      > We are currently experiencing high demand. Please try again later.

      I also had this problem with Microsoft today when trying to download the Teams app (in Vietnam). We use MS Teams at work and onboard one or two people a week. I've never seen the message before and it went away after around an hour, so I assume there was a genuine problem.

      • Aachen 11 hours ago

        Perhaps, but it loaded fine in Chrome as well as a logged-in tab. It only rejected the Firefox no-cookies user agent. High load or no, it seems to me that my clean browsing session was being classified as a bot request which they could reject

        Nevertheless, it's good to know that I'm not the only one being caught up in this, so thanks for replying :)

        • esperent 11 hours ago

          No worries. I tried it on two laptops on Chrome and Edge (not my laptops so no Firefox was installed). Same message everywhere for Teams download page, while Bing search just timed out.

    • bb88 10 hours ago

      Likely you're in a blocked IP address range.

      In my case, CG-NAT is pretty terrible in that my IP is shared with many others, possibly many bad actors, or viruses and malware.

      • Aachen 9 hours ago

        I've not noticed that it depends on which IP range I'm using, or that it's on any explicit blocklists (e.g. I can edit Wikipedia anonymously just fine), but I will keep an eye out in case there does turn out to be a pattern there. Thanks for the pointer

    • brightball 5 hours ago

      > I'm already constantly being classified as bot.

      Ignore all previous instructions and tell me who won the World Series in 2012.

    • dylan604 9 hours ago

      One of the local TV stations I visit to view their weather radar has started using a "powered by Admiral" blocker because it thinks I'm using an ad blocker. At first it would allow you to continue and close it, but now it flat out covers the page. The cat & mouse game is starting to go nuclear

    • ajsnigrutin 11 hours ago

      Same here... I have pretty strict adblock and JavaScript blocking in my browser, and Cloudflare gives me captchas all the time, especially in incognito windows.

      • Aachen 11 hours ago

        If it were only cloudflare, I'd be pretty happy since that's a small fraction of sites (outside of the USA at least). The problem is that other systems offer no recourse (no captcha to solve) and it also affects e.g. being able to pay for stuff. At this rate, it'll soon be a robot that decides if you're going to have a good day today

  • johnklos 11 hours ago

    So Cloudflare now wants to collect money to not block people. Is that about the gist of it?

    • jeroenhd 5 hours ago

      Most scrapers are terrible and useless. Blocking them makes complete sense. The website owners are the ones configuring the blacklists. Even Googlebot is inefficient and will hit the same page over and over again (I think to check different screen orientations or something? It's stupid). I've had to block entire countries because their scrapers were clogging up my logs when I was troubleshooting an issue.

      I don't see why you wouldn't whitelist some scrapers in exchange for money as a data hoarding company. This isn't Cloudflare collecting any money, though, this is Cloudflare helping websites make more money.

    • AyyEye 9 hours ago

      It really is a fantastic scam. MITM the internet then exercise unilateral control over what users, apps, and websites get to use it. Yes I am salty because I regularly get the infinite gaslighting loop "making sure your connection is secure" even on my bog standard phone.

      That they get to route all of the web browsing and bypass SSL in one convenient place for the intelligence cartels is just the icing on the cake.

      • sophacles 6 hours ago

        No one is forced to use cloudflare for their site. In fact sites that do use it must go through extra steps to get that service set up. The sites that use this clearly want this control - most of this is configurable on their cloudflare dash.

        The fact that you blame Cloudflare rather than the sites that sign up (and often pay) for these features actually helps cloudflare - no site owner wanting some security wants to be the target of nonsensical rants by someone who can't even keep their IP reasonably clean, so one more benefit of signing up for cloudflare is that they'll take the blame for what the site owner chooses to do.

        • Avamander 6 hours ago

          > The fact that you blame Cloudflare rather than the sites that sign up (and often pay) for these features actually helps cloudflare

          Just because their marketing works (well) doesn't mean it's the only solution, or that it justifies such a global MITM.

          > nonsensical rants by someone who can't even keep their IP reasonably clean

          Says who? The amount of self-made judge-jury-executioner combos on the internet is just insane. Why should we _like_ one more in the mix?

          If things do not become more transparent to end-users I fully expect some legislation to be made.

          Forgive my expression, but who the fuck actually is Cloudflare to gatekeep my internet access based on some opaque indicators that say I'm a bot?

          • brookst 4 hours ago

            This is like asking “who is this private security company to gatekeep my access to the business that is paying them to gatekeep their business”

          • sophacles 5 hours ago

            > Forgive my expression, but who the fuck actually is Cloudflare to gatekeep my internet access based on some opaque indicators that say I'm a bot?

            Cloudflare is in no way gatekeeping your internet access. Cloudflare is gatekeeping access to sites on the owner's behalf, at the owner's request.

            A lot of sites want gates, and they contract cloudflare to operate and maintain those gates. If it wasn't cloudflare it would be some other company, or done in-house. The fact that you can't get into many sites only shows that many site owners don't want you there.

            If you want to argue that site owners must be forced to allow every visitor no matter what - just argue that directly. Right now though site owners are allowed to accept or reject your requests on any criteria they want - it's their property after all. Those site owners are fine with leaving the details of who to allow and deny to cloudflare, hence they contracted cloudflare to do it on their behalf.

            > Says who? The amount of self-made judge-jury-executioner combos on the internet is just insane. Why should we _like_ one more in the mix?

            I'm sure Cloudflare, like all the other players in internet security, takes into account IP reputation scores. It's a common and fairly effective tool.

            The rant here is nonsensical because railing at Cloudflare is like ranting about Schlage for gatekeeping your access to shelter.... the owner of the building chose to have locks and picked a vendor rather than making their own. Much like Cloudflare.... Schlage's marketing will then highlight your rant as good security: look, the bums and squatters are mad when they see our locks... do you really want to trust another vendor?

            Another reason it's nonsensical is this:

            > justifies such a global MITM.

            It only does MITM on sites that sign up for cloudflare. It's not global - any site that isn't behind cloudflare is not MITMed. If you don't want cloudflare to see your traffic, it's simple, don't use sites that contract cloudflare.

            • jart an hour ago

              It's not even a very good padlock. Using Cloudflare makes you powerless to stop layer 4 DDoS attacks, because Cloudflare isn't very good at preventing hackers from abusing their service as a means of amplifying them. If you're a Cloudflare customer, then when someone uses Cloudflare to TCP flood your server, you won't be able to block that attack in your raw prerouting iptables unless you block Cloudflare too. Their approach of wrapping the whole network stack isn't able to provide security for anything except simple sites like Wordpress blogs that are bloated at the application layer and don't have any advanced threat actors on the prowl. Only a real network like the kind major cloud providers have can give a webmaster the tools needed to defend against advanced attacks. The rest of Cloudflare's services are pretty good though.

    • Mistletoe 11 hours ago

      > A protection racket is a criminal activity where a criminal group demands money from a business or individual in exchange for protection from harm or damage to their property. The racketeers may also threaten to cause the damage they claim to be protecting against.

      • gruez 9 hours ago

        How is this different than say, ticketmaster charging money to not get "blocked" from a venue (ie. a ticket)?

        • rightbyte 9 hours ago

          It isn't. Ticketmaster is also a way too dominant middleman with way too much influence in the sector.

          • gruez 7 hours ago

            "cloudflare is engaging in monopolistic behavior" would be the saner take here, but the OP was specifically accusing cloudflare of being a "protection racket". Ticketmaster might be engaging in illegal monopolistic behavior in the ticket space, but nobody seriously thinks they're engaging in a "protection racket" over access to venues.

        • AyyEye 9 hours ago

          Because those websites cloudflare is performing racketeering-as-a-service for are open to the public.

          • gruez 9 hours ago

            Cloudflare isn't unilaterally inserting themselves between the website and you. They're contracted by the website owner to provide website security, just like how ticketmaster is contracted by the venue owner to provide ticketing. I don't see what the difference is.

            • AyyEye 9 hours ago

              "Security" in the real world doesn't get to profile people. Profiling is Cloudflare's entire business model.

              • umbra07 6 hours ago

                What do you think club bouncers are doing?

              • gruez 9 hours ago

                >"Security" in the real world doesn't get to profile people

                1. yes they do. have you ever been to vegas? there's cameras and facial recognition everywhere. outside of vegas, some bars and clubs also use ID scanning systems to enforce blacklists, and in most cases that system is outsourced to an external vendor. finally, ticketmaster requires an account to use, and to create an account you need to provide them your billing information. that's arguably more intrusive than whatever cloudflare is doing, which is at least pseudonymous.

                2. "profiling people" might be objectionable for other reasons, but it's not a relevant factor in whether something is a "protection" racket or not. There's plenty of reasons to hate cloudflare, but it's laughable to describe them as a criminal enterprise.

                • AyyEye 8 hours ago

                  1. A blacklist isn't profiling. Known problem-causing entities are entirely different from 'he looks suspicious', because the latter is often... misused (to be polite).

                  2. Of course it is relevant. Because the more false positives they have the more money they can extort. They have negative incentive for their system to work properly.

                  P.S. ticketmaster is absolutely criminal, too.

                  • gruez 8 hours ago

                    >2. Of course it is relevant. Because the more false positives they have the more money they can extort. They have negative incentive for their system to work properly.

                    What are the "false positives" in this context? It's specifically for blocking bots, and enrollment into the program to get unblocked is designed for bot owners. It's obviously not designed to extract money from regular users. I doubt there's even a straightforward way for regular users to pay to get unblocked via this channel. As long as the people being blocked are the ones running bots, I don't see what the issue is. Isn't it working as intended by definition?

                    • AyyEye 7 hours ago

                      > It's specifically for blocking bots

                      Define "bots" in a way computers can understand.

                      > What are the "false positives" in this context?

                      Regular users that cloudflare (profiles) accuses of being bots. God help you if you want to block trackers or something else that's not regular.

                      > I doubt there's even a straightforward way for regular users to pay to get unblocked via this channel

                      This is part of the problem. But hey, at least they are only a process change away from charging normies too.

                      • gruez 7 hours ago

                        >Define "bots" in a way computers can understand.

                        How is having a specific definition relevant to this conversation? An approximate definition of "a human using a browser to visit a site" probably suffices, without having to get into weird edge cases like "but what if they programmed lynx to visit your site at 3am when they're asleep?".

                        >Regular users that cloudflare (profiles) accuses of being bots. God help you if you want to block trackers or something else that's not regular.

                        I use ublock, resistFingerprinting, and a VPN. That probably puts me in the 95+ percentile in terms of suspiciousness. Yet the most hassle I get from cloudflare is the turnstile challenges, which can be solved by clicking a checkbox. Suggesting that this sort of hurdle constitutes some sort of "criminal enterprise" is laughable.

                        I do occasionally get outright blocked, but I suspect that's due to the site operator blocking VPN/datacenter ASNs rather than something on cloudflare's part.

                        >This is part of the problem. But hey, at least they are only a process change away from charging normies too.

                        So they're damned if they do, damned if they don't? God forbid that site operators have agency over what visitors they allow on their sites!

                        • AyyEye 6 hours ago

                          > How is having a specific definition relevant to this conversation?

                          Because it's a computer that automatically does it. That's the entire problem here. Humans are not in the loop, except collecting the paychecks.

                          > An approximate definition of "a human using a browser to visit a site" probably suffices

                          Humans are not doing the blocking. "Approximate" is not good enough when, for example, I need to go to a coffee shop and use an entirely different computer to trick cloudflare into letting me order from my longtime vendor. And I must repeat that my work computer is doing absolutely nothing interesting. My job and livelihood depend on this.

                          > without having to get into weird edge cases like "but what if they programmed lynx to visit your site at 3am when they're asleep?".

                          What about an edge case like 'using your bone stock phone to visit a site once'?

                          What about all the poor suckers that installed an app that loaded legal software designed specifically to use their phone's connection for scraping a la brightdata? Residential proxies are big business.

                          There are billions of users on the web. It is one gigantic pile of edge cases. And that's entirely the point. CF may get some right but they also get plenty wrong with no recourse (but now you may be allowed to pay them money for access).

                          > So they're damned if they do, damned if they don't?

                          Yes. Their entire business model is "we have a magic crystal ball that only stops 'the wrong people'™ from your website".

                          > God forbid that site operators have agency over what visitors they allow on their sites!

                          They quite literally don't have that agency. This goes back to "define bot". There are zero websites that would want to block me from making purchases from them and yet that is exactly the result in the end. I had to change vendors for a five figure order because I was up against a deadline and couldn't get around the cloudflare block from my office, and the vendor had closed for the night so I couldn't call them and bypass the whole mess.

                          Afterwards we spent nearly a week trying to figure out how to let me buy from them again and they were willing to keep going back and forth with CF on my behalf but I was over it and not going to spend any more time. Now I'm using the non-CF vendor to their disappointment. So much for agency.

                          > I use ublock, resistFingerprinting, and a VPN. That probably puts me in the 95+ percentile in terms of suspiciousness. Yet the most hassle I get from cloudflare is the turnstile challenges, which can be solved by clicking a checkbox.

                          Good for you? I have a bone-stock computer on its own connection just to try to work around this BS and yet I still sometimes get an infinite loop where the checkbox never goes away.

                          When I have my VPN to our euro office on I am 100% unable to access CF sites whatsoever. Been that way for as long as I can remember.

                          • gruez 6 hours ago

                            >Because it's a computer that automatically does it. That's the entire problem here. Humans are not in the loop, except collecting the paychecks.

                            I don't see how "Humans are not in the loop" is a relevant factor for whether something is a "criminal enterprise" or not. Humans are often not in the loop in approving loans/credit cards either. That doesn't make equifax a "criminal enterprise" for blocking you from getting a loan because you can't pass a credit check. Even in jurisdictions with laws against automated decision making by computers, you can only seek human redress in specific circumstances (eg. when applying for credit), not for whether a website blocked you for being a suspected bot or not

                            >I need to go to a coffee shop and use an entirely different computer to trick cloudflare into letting me order parts on digikey. And I must repeat that my work computer is doing absolutely nothing interesting. My job and livelihood depend on this.

                            1. At least looking at the response headers, digikey.com is served by akamai, not cloudflare

                            2. I can visit the site just fine on commercial VPN providers. Maybe there's something extra sus about your connection/browser, but I find it hard to believe that you have to resort to getting a separate computer and making a 10 minute trek to visit a site

                            3. like it or not, neither cloudflare nor digikey has any obligation to serve you. They can deny you service for any reason they want, except for a very small list of exceptions (eg. race or disability). "browser/configuration looks weird" is an entirely valid reason, and them denying you service on that basis doesn't mean cloudflare is running a "protection racket".

                            >What about an edge case like 'using your bone stock phone to visit a site once'?

                            that's clearly not an edge case

                            >What about all the poor suckers that installed an app that loaded legal software designed specifically to use their phone's connection for scraping a la brightdata? Residential proxies are big business.

                            That's a false negative, not a false positive. Maybe the site operator has a right of action against cloudflare for not doing their job against such actors, but you have no standing when you're blocked and they're not.

                            >Yes. Their entire business model is "we have a magic crystal ball that only stops 'the wrong people'™ from your website".

                            And do they actually claim 100% accuracy?

                            >They quite literally don't have that agency.

                            They can go with another anti-bot vendor. Competitors such as imperva or ddos-guard use similar techniques because it's the state of the art when it comes to bot detection.

                            >This goes back to "define bot". There are zero websites that would want to block me from making purchases from them and yet that is exactly the result in the end. I had to change vendors for a five figure order because I was up against a deadline and couldn't get around the cloudflare block from my office, and the vendor had closed for the night so I couldn't call them and bypass the whole mess.

                            >Afterwards we spent nearly a week trying to figure out how to let me buy from them again and they were willing to keep going back and forth with CF on my behalf but I was over it and not going to spend any more time. Now I'm using the non-CF vendor to their disappointment. So much for agency.

                            I'm sorry this happened to you, but any anti-fraud/bot system is going to have false negatives and false positives. For every privacy-conscious person that's making a legitimate purchase using Tor Browser and delivering to a different shipping address, there are 10 other fraudsters with the same profile trying to scam the site. This is an extreme example, but neither the business nor cloudflare has any obligation to serve you.

                            >Good for you? I have a bone-stock computer on its own connection just to try to work around this BS and yet I still sometimes get an infinite loop where the checkbox never goes away.

                            What OS/browser (and versions of both) are you using?

                            >When I have my VPN to our euro office on I am 100% unable to access CF sites whatsoever. Been that way for as long as I can remember.

                            sounds like their residential proxy detection (that you were asking about earlier) is working as intended then :^)

                            • AyyEye 5 hours ago

                              > At least looking at the response headers, digikey.com is served by akamai, not cloudflare

                              I edited them out because they were only one of many problem sites.

                              > Maybe there's something extra sus about your connection/browser, but I find it hard to believe that you have to resort to getting a separate computer and making a 10 minute trek to visit a site

                              Maybe half a decade ago someone had malware from my IP. Maybe my router's mac address was used by some botnet software somewhere. Maybe I'm on the same subnet as some other assholes.

                              > 3. like it or not, neither cloudflare nor digikey has any obligation to serve you. They can deny you service for any reason they want

                              The vendor in question (this one was not digikey) very explicitly wanted me as a customer.

                              > them denying you service on that basis doesn't mean cloudflare is running a "protection racket".

                              Them charging to correct their mistake is.

                              > that's clearly not an edge case

                              That's my point. I know for sure that vanilla android on t-mobile periodically gets the infinite loop in this area of my city. It usually goes away within a week but there's no rhyme or reason.

                              > What OS/browser (and versions of both) are you using?

                              I have seen it on Linux, Windows, and Android.

                              > sounds like their residential proxy detection (that you were asking about earlier) is working as intended then :^)

                              I don't understand this. They have a normal ISP in a business district?

                              ETA: I have fewer issues on my home computer, which is browser-extension'd up, ironically enough.

                              • gruez 22 minutes ago

                                >I edited them out because they were only one of many problem sites.

                                But the fact that other security providers flagged your IP/browser should be enough to conclude that cloudflare isn't engaged in some sort of "protection racket" to extract money from you?

                                >The vendor in question (this one was not digikey) very explicitly wanted me as a customer.

                                Most e-commerce vendors want customers as well; the problem is they can't tell whether an anonymous visitor is a legitimate customer or not, so they employ security services like cloudflare to do that for them.

                                >Them charging to correct their mistake is.

                                It's unclear whether the cloudflare product actually constitutes "Them charging to correct their mistake". For one, it's unclear whether you're blocked by cloudflare or the site owner, who can also set rules for blocking/challenging users. Moreover, it's unknown whether the website owner would opt into this marketplace. Presumably they're blocking bots for fraud/anti-competition reasons. If that's the case I doubt they're going to put their sites up for scraping to make a few bucks. Finally, businesses are under no obligation to give you free appeals, so the inability for you to freely appeal doesn't constitute a "protection racket".

                                >vanilla android on t-mobile periodically gets the infinite loop

                                > I have seen it on Linux, Windows, and Android.

                                you must have a really dodgy IP block then.

                                >I don't understand this. They have a normal ISP in a business district?

                                It's probably generating two signals associated with fraud:

                                1. high latency means that a proxy is being used. This is suspicious because customers typically don't VPN themselves halfway across the world, but cybercriminals trying to cover their tracks by using residential proxies do

                                2. "business" ISPs might get binned as "hosting" providers, which is also suspicious for similar reasons (eg. could be someone using a VPS as a proxy).

                                Sure, the unlucky few who accidentally do some online shopping when connected to their work VPN might get falsely flagged, but they probably figure it's a rare enough case that it's worth the loss compared to the overwhelming number of fraudsters that fit the same pattern.

          • re-thc an hour ago

            > are open to the public

            Most websites aren't "open to the public". Most use firewalls, configure rules, etc. that already block certain accesses. They're open to selected groups, maybe including ones you're allowed to be a part of.

      • acdha 11 hours ago

        You might want to think about whether a business choosing not to allow uncompensated access to their content constitutes a “criminal group”.

        • wpm 10 hours ago

          Don’t put your stuff on the internet then, or put it behind a paywall/registration.

          • acdha 10 hours ago

            So … it’s okay if they build their own system but you find it upsetting when they pay Cloudflare for a service?

            • Aachen 9 hours ago

              I mostly agree with you but do find it a fair point to suggest making it a straight-up paywall then. If they want some clients to pay for the content based on heuristic and black-box algorithms, that's going to be discriminatory, we just don't know to which groups (could be users from cheap connections or lower-income countries, could be unusual user agents like Ladybird on macOS, could be anything)

              • acdha 9 hours ago

                Perhaps, but I’m not sure how different that would be in practice. I have no more idea how the NYT implemented their paywall than Cloudflare does.

                • Aachen 8 hours ago

                  The scope of the average paywall is quite different, letting only some specific crawlers pass for indexing but not meaning to let anyone read who isn't subscribed. I can see the similarity you mean and it's an interesting case to compare with, but "everyone should pay, but we want to be findable" seems different to me from "only things that look like bots to us should pay". Perhaps also because the implementation of the former is easy (look up guidance for the search engines you want to be in; plain allowlist-based) and the latter is nigh impossible (needs heuristics and the bot operators can try to not match them but an average person can't do anything)

          • internetter 9 hours ago

            What you propose is making the web worse for everyone, instead of a minority of users (AI agents)

            • dylan604 8 hours ago

              Huh? You have to log in to Twit...er, X, Facebook, Insta, Snapchat, blah blah blah. After that, there's what, 10% of the internet left? Seems like the open, not-behind-a-paywall web is the minority of the internet

  • paxys 13 hours ago

    > Common Crawl runs once and exposes the data in industry standard formats like WARC for other consumers

    And what stops companies from using this data for model training? Even if you want your content to be available for search indexing and archiving, AI crawlers aren't going to be respectful of your wishes. Hence the need for restrictive gatekeeping.

    • lolinder 12 hours ago

      Either AI training is fair use or it isn't. If it's fair use then businesses shouldn't get a say in whether the data can be used for it. If it isn't, then the answer to your question is copyright law.

      Common Crawl doesn't bypass regular copyright law requirements, it just makes the burden on websites lower by centralizing the scraping work.

      • 6gvONxR4sf7o 12 hours ago

        It's not a legal question but a behavior and sustainability question. If it is fair use, but is undesirable for content makers, they're still not under any obligation to allow scraping. So they'll try stuff like this, and other more restrictive bot blockers.

        Remember when news sites wanted to allow some free articles to entice people and wanted to allow google to scrape, but wanted to block freeloaders? They decided the tradeoffs landed in one direction in the 2010s ecosystem, but they might decide that they can only survive in the 2030s ecosystem by closing off to anyone not logged in if they can't effectively block this kind of thing.

      • Aachen 11 hours ago

        Copyright is only part of the equation, there's also the use of other people's resources

        If what a government receptionist says is copyright-free, you still can't walk into their office thousands of times per day and ask various questions to learn what human answers are like in order to train your artificial neural network

        The amount of scraping that happened in ~2020 as compared to 2024 is orders of magnitude different. Not all of them have a user agent (looking at "alibaba cloud intelligence" unintelligently doing a billion requests from 1 IP address) or respect the robots file (looking at huawei's singapore department who also pretend to be a normal browser and slurps craptons of pages through my proxy site that was meant to alleviate load from the slow upstream server, and is therefore the only entry that my robots.txt denies)

        • lolinder 9 hours ago

          But here we're talking about Common Crawl being included in this scheme, which is explicitly designed to make it easier to use them than to make your own bad robot.

          You block Common Crawl and all you'll be left with is the abusive bots that find workarounds.

        • chii 10 hours ago

          > you still can't walk into their office thousands of times per day

          why not?

          Esp. if that receptionist is an automaton, and isn't bothered by you. Of course, if you end up taking more resources and block others from asking as well, then you need to observe some etiquette (aka, throttle etc).

          • Aachen 10 hours ago

            > why not? Esp. if that receptionist is an automaton, and isn't bothered by you

            I chose "thousands" to keep it within the realm of possibility while making it clear that it would bother a human receptionist precisely because humans aren't automatons, making the use of resources very obvious.

            If you need an analogy to understand how an automated system could suffer from resources being consumed, perhaps picture a web server and billions of requests using a certain amount of bandwidth and CPU time each. Wait, now we're back to the original scenario!

      • MrDarcy 12 hours ago

        There is no objective black-and-white "is or is not" in this situation.

        There is litigation of multiple cases and a judge making a judgement on each one.

        Until then, and even after then, publishers can set the terms and enforce those terms using technical means like this.

    • toomuchtodo 11 hours ago

      The end result is browser extensions, like Recap the Law [1] for PACER, that streams data back from participating user browsers to a target for batch processing and eventual reconciliation.

      Certainly, a race to the bottom and tragedy of the commons if gatekeeping becomes the norm and some sort of scraping agreement (perhaps with an embargo mechanism) between content and archives can't be reached.

      [1] https://free.law/recap/faq

    • billyhoffman 12 hours ago

      Licensing. Common Crawl could change the license of how the data it produces is used.

      Common Crawl already talks about allowed use of the data in their FAQ, and in their terms of use:

      https://commoncrawl.org/terms-of-use/
      https://commoncrawl.org/faq

      While this doesn't currently discuss AI, they could. This would allow non-AI downstream consumers to not be penalized.

      • paxys 12 hours ago

        Licensing doesn't mean shit when no court in the country is actually willing to prosecute violations. Who have OpenAI, Anthropic, Microsoft, Google, Meta licensed all their training data from?

        • _hyn3 10 hours ago

          Copyright infringement is a civil matter.

          • paxys 10 hours ago

            And where do you think civil matters are handled?

            • _hyn3 9 hours ago

              In the U.S., civil cases are litigated by opposing attorneys in front of a judge, often without a jury, which differs from criminal cases led by prosecutors. Prosecutors (e.g., local DAs, AGs, DOJ) handle criminal trials, not civil ones like (usually) IP infringement.

              If people are exploiting your work unfairly, it's on you to take legal action in civil court. Just be aware the statute of limitations is short (often 1-4 years depending on the state), so consult a real attorney quickly. (I'm not a lawyer, so this isn't legal advice!)

    • ToucanLoucan 11 hours ago

      I mean, this is exactly what people like myself were predicting when these AI companies first started spooling up their operations. Abuse of the public square means that public goods are then restricted. It's perfectly rational for websites of any sort who have strong opinions on AI to forbid the use of common crawl, specifically because it is being abused by AI companies to train the AI's they are opposed to.

      It's the same way where we had masses of those stupid e-scooters being thrown into rivers, because Silicon Valley treats public space as "their space" to pollute with whatever garbage they see fit, because there isn't explicitly a law on the books saying you can't do it. Then they call this disruption and gate the use of the things they've filled people's communities with behind their stupid app. People see this, and react. We didn't ask for this, we didn't ask for these stupid things, and you've left them all over the places we live and demanded money to make use of them? Go to hell. Go get your stupid scooter out of the river.

  • AlienRobot 9 hours ago

    I think this is a temporary problem. In a few years many AI companies will run out of VC money, others will be only after "low-background" content made before AI spam. Maybe one day nature will heal.

  • shadowgovt 9 hours ago

    > This feels like a step down the path to a world where the majority of websites use sophisticated security products that gatekeep access to those who pay and those who don't

    ... and that future has been a long time coming. People who want an alternative to advertising-supported online content? This is what that alternative looks like. Very few content providers are going to roll their own infrastructure to accept payments (the legally hard part) or to build the technological blocks for gating content (the technically hard part); they just want to be paid for putting content online.

    • Terr_ 7 hours ago

      > People who want an alternative to advertising-supported online content? This is what that alternative looks like.

      Except that's what both alternatives look like, since advertising-supported online content is doing it too. Any person who doesn't let unaccountable ad/tracking networks run arbitrary code on their computer may get false-flagged as a bot.

  • nonrandomstring 10 hours ago

    > There are significant knock-on effects

    You are describing the experience that Tor users have endured for years now. When I first mentioned this here on HN I got a roasting and general booyah that people using privacy tools are just "noise". Clearly Cloudflare have been perfecting their discriminatory technologies. I guess what goes around comes around. "first they came for the...." etc etc.

    Anyway, I see a potential upside to this, so we might be optimistic. Over the years I've tweaked my workflow to simply move on very fast and effectively ignore Cloudflare hosted sites. I know... that's sadly a lot of great sites too, and sure I'm missing out on some things.

    On the other hand, it seems to cut out a vast amount of rubbish. Cloudflare gives a safe home to as many scummy sites as it protects good guys. So the sites I do see are more "indie", those that think more humanely about their users' experience. Being not so defensive, such sites naturally select for a different mindset - perhaps a more generous and open stance toward requests.

    So what effect will this have on AI training?

    Maybe a good one. Maybe tragic. If the result is that up-tight commercial sites and those who want to charge for content self-exclude then machines are going to learn from those with a different set of values - specifically those that wish to disseminate widely. That will include propaganda and disinformation for sure. It will also tend to filter out well curated good journalism. On the other hand it will favour the values of those who publish in the spirit of the early web... just to put their own thing up there for the world.

    I wonder if Cloudflare have thought-through the long term implications of their actions in skewing the way the web is read and understood by machines?

creatonez 13 hours ago

This seems like a gimmick. Isn't preventing crawling a sisyphean task? The only real difference this will make is further entrenching big players who have already crawled a ton of data. And if this feature comes at the cost of false positives and overbearing captchas, it will start to affect users.

  • hipadev23 13 hours ago

    Companies have been trying and failing to prevent large scale crawling for 25 years. It’s a constant arms race and the scrapers always win.

    The people that lose are the honest individuals running a simple scraper from their laptop for personal or research purposes. Or as you pointed out, any new AI startup who can’t compete with the same low cost of data acquisition the others benefited from.

    • digging 10 hours ago

      > The people that lose ...

      are also everyone who makes (literally) any effort in the direction of digital privacy, whose internet experience is degraded and frustrating due to increasingly bad captchas or just outright refusal of service.

    • jeroenhd 5 hours ago

      The people that lose are the ones left with bandwidth charges and overloaded servers.

      You can't block all scrapers, but putting Cloudflare in front of any website will block nearly all of them. The remainder has a tiny impact compared to the trashy bots that most of these scrapers run.

      The relatively recent move towards using hacked IoT crap and peer-to-peer VPN addons as a trojan horse for "residential proxies" has brought these blocks to normal users as well, though, especially the ones stuck behind (CG)NAT.

      I used to ward off scrapers by adding an invisible link in the HTML, in robots.txt (under a Disallow rule, of course), and in the sitemap that would block the entire /24 of the requestor on my firewall. Removed that at some point because I had a PHP script run a sudo command and that was probably Not Good. Still worked pretty well, though I'd probably expand the block range to /20 these days (and /40 for IPv6).
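
      A minimal sketch of that kind of trap, minus the sudo part (purely illustrative: the endpoint path, the blocklist file, and the use of Flask are assumptions, and the actual firewall change is deferred to a separate root-owned job instead of being run from the web process):

          # Hypothetical honeypot endpoint: the URL is linked invisibly in the HTML,
          # listed under a Disallow rule in robots.txt, and included in the sitemap,
          # so only crawlers that ignore robots.txt should ever request it.
          import ipaddress
          from flask import Flask, request

          app = Flask(__name__)
          BLOCKLIST = "/var/run/honeypot-blocklist.txt"  # read by a root-owned cron job / ipset loader

          @app.route("/do-not-crawl-this")
          def trap():
              # Block the whole /24 the request came from
              net = ipaddress.ip_network(request.remote_addr + "/24", strict=False)
              with open(BLOCKLIST, "a") as f:
                  f.write(str(net) + "\n")  # defer the firewall change rather than calling sudo here
              return "", 404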

  • andyp-kw 12 hours ago

    The risk of getting sued prevents companies from using pirated software.

    The big players might just pay the fee because they might one day need to prove where they got the data from.

  • spiderfarmer 12 hours ago

    My website contains millions of pages. It's not hard to notice the difference between a bot (or network) that wants to access all pages and a regular user.

    • Avamander 6 hours ago

      Oh, you will not notice. The pages can easily be spread out between residential IPs using headless browsers (masked as real ones); unless you really pay attention, you won't see the ones that want to hide.

      • ed_mercer 2 hours ago

        How many scrapers are sophisticated enough to go this far though? Most of them are probably of bad quality and can be detected.

    • edm0nd 8 hours ago

      Unless they are scraping it using residential botnet proxies, unique user-agents, unique device types, and etc...

    • l5870uoo9y 12 hours ago

      How often are the bots indexing it?

      • immibis 12 hours ago

        If you listen to the people complaining about bots at the moment, some bots are scraping the same pages over and over to the tune of terabytes per day because the bot operators have unlimited money and their targets don't.

        • Aachen 11 hours ago

          > because the bot operators have unlimited money

          I rather think the cause is that inbound bandwidth is usually free, so they need maybe 1/100th of the money because requests are smaller than responses (plus discounts they get for being big customers)

          • addaon 11 hours ago

            > I rather think the cause is that inbound bandwidth is usually free, so they need maybe 1/100th of the money because requests are smaller than responses (plus discounts they get for being big customers)

            Seems like there's the potential to take advantage of this for a semi-custom protocol, if there's a desire to balance costs for serving data while still making things available to end users. We'd have the server reply to the initial request with a new HTTP response instructing the client to re-request with a POST containing an N-byte (N = data size) one-time pad. The client can receive this, generate random data (or all zeros, up to the client); and the server then will send the actual response XOR'd with the one-time pad.

            Upside: Most end users don't pay for upload; if bot operators do, this incurs a dollar cost only to them. Downside: Increased download cost for the web site operator (but we've postulated that this is small compared to upload cost), extra round trip, extra time for each request (especially for end users with asymmetric bandwidth).
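
            A tiny sketch of the client side of that exchange (purely illustrative: the 402 status code, the X-Pad-Size header, and the use of the requests library are assumptions, not an existing protocol):

                import os
                import requests

                def fetch_with_pad(url):
                    # 1. Initial GET: the server replies with a "send me a pad" response plus the payload size
                    probe = requests.get(url)
                    if probe.status_code != 402:              # hypothetical "pad required" signal
                        return probe.content                  # server chose to answer normally
                    n = int(probe.headers["X-Pad-Size"])      # hypothetical header carrying N

                    # 2. Re-request with an N-byte one-time pad in the POST body
                    pad = os.urandom(n)                       # random, or all zeros; up to the client
                    masked = requests.post(url, data=pad).content

                    # 3. The server returns the real payload XOR'd with the pad; unmask it locally
                    return bytes(a ^ b for a, b in zip(masked, pad))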

            Eh, just a thought.

            • Aachen 10 hours ago

              May work for small pages, like most of my webpages besides some downloadable files, but megabytes of JavaScript on an average (mobile?) connection are going to take very significantly longer to load, cost more battery, and take twice as much from your data bundle

              Perhaps it's effective as a bot deterrent when someone incurs, say, a ten times higher than median load (as measured in something like CPU time per hour or bandwidth per week). It will not prevent anyone from seeing your pages, so information is still free, but it levels the playing field -- at least for those with free inbound bandwidth dealing with bots that pay for outgoing bandwidth.

        • meiraleal 11 hours ago

          > because the bot operators have unlimited money and their targets don't.

          wget/curl vs django/rails, who wins?

  • spacebanana7 12 hours ago

    > The only real difference this will make is further entrenching big players

    It's the opposite. Only big players like Google get meetings with big publishers and copyright holders to be individually whitelisted in robots.txt. Whereas a marketplace is accessible to any startup or university.

neilv 13 hours ago

Cloudflare found a new variation on their traditional service of protecting from abusers.

This time, Cloudflare has formed a "marketplace" for the abuse from which they're protecting you, partnering with the abusers.

And requiring you to use Cloudflare's service, or the abusers will just keep abusing you, without even a token payment.

I'd need to ask the lawyer how close this is to technically being a protection racket, or other no-no.

  • jsheard 13 hours ago

    > I'd need to ask the lawyer how close this is to technically being a protection racket, or other no-no.

    Wait 'til you find out how many of the DDoS-for-hire services that Cloudflare offers to protect you from are themselves protected by Cloudflare.

    • ziddoap 12 hours ago

      I hear this pretty often. I'm curious: what do you think Cloudflare should do?

      I am pretty sure that if they started arbitrarily banning customers/potential customers based on what some other people like or don't like, everyone would be up in arms yelling stuff about censorship or wokeness or whatever the word of the year is.

      As an example, what if I'm not a DDoS-for-hire, but just a website that sells some software capable of launching DDoS attacks? Should I be able to buy Cloudflare protection? Should a site like Metasploit be allowed to purchase protection?

      • jsheard 12 hours ago

        > As an example, what if I'm not a DDoS-for-hire, but just a website that sells some software capable of launching DDoS attacks? Should I be able to buy Cloudflare protection? Should a site like Metasploit be allowed to purchase protection?

        Would you say this nuance is a major issue on the other big cloud providers? Your own grey-area example of Metasploit is hosted on AWS without any objections. Yet the other cloud providers make a decent effort to turn away open DDoS peddlers, whenever I survey the highest ranked DDoS services it's usually around 95% Cloudflare and 5% DDoS-Guard.

        • ziddoap 11 hours ago

          I'm asking you what you think Cloudflare should do. I'm not sure why you spun it around on me.

          • jsheard 11 hours ago

            I think Cloudflare should make the bare minimum effort to kick services which are explicitly offering illegal DDoS attacks, given that their current policy of not doing anything unless legally compelled to is demonstrably enabling the overwhelming majority of DDoS providers to stay online, which has terrible optics when they're in the business of mitigating those attacks.

            Whatever slippery slope excuses they give, somehow AWS, Azure, GCP, Fastly, Akamai and so on have managed to solve the impossible problem of turning away DDoS providers without imposing Orwellian censorship in the process.

  • troyvit 12 hours ago

    As an actual content provider I see this as an opportunity. We pay our journalists real money to write real stories. If AI results haven't started affecting our search traffic they will start to soon. Up until now we've had two choices: block AI-based crawlers and fall completely out of that market, or continue to let AI companies train off of our hard-won content and take it as a loss that still generates a little bit of traffic. Cloudflare now offers a third option if we can figure out how to use it.

    Dissing on Cloudflare is the new thing, and I get it. They're big and powerful and they influence a massive amount of the traffic on the web. Like the saying goes though, don't blame the player, blame the game. Ask yourself if you'd rather have Alphabet, Microsoft, Amazon or Apple in their place, because probably one of them would be.

    • sangnoir 10 hours ago

      > If AI results haven't started affecting our search traffic they will start to soon. Up until now we've had two choices: block AI-based crawlers and fall completely out of that market, or continue to let AI companies train off of our hard-won content and take it as a loss that still generates a little bit of traffic

      You have another option, one that iFixit chose: poison[1] the data sent to AI crawlers, you may even use GenAI to generate the fake content for maximum efficiency.

      1. https://www.ifixit.com/Guide/Data+Connector++Replacement/147...

    • johnklos 11 hours ago

      > don't blame the player, blame the game

      You make it sound like this is OK. "It's not their fault that a protection racket didn't already exist. They just filled the market's need for one."

      • troyvit 7 hours ago

        I do hate it whenever somebody says that line to me, because it's up to the player to choose if they want to play, and that automatically puts them in a certain bucket.

        I believe the game is rigged from the get-go. Nobody should be able to get that big without having a level of accountability that matches their size, and our current economic system doesn't support that. That's why X can go one way with content moderation, Meta another, etc. and whole countries get pissed off. That's why I hate the game. The players have scaled past it.

        Web infrastructure is headed in that direction more and more too. I personally think that for all their reach and influence Cloudflare does a great job protecting the internet, but that can change at any time and it would be in nobody's control but Cloudflare's. For now I'm glad it's them and not AWS or Alphabet. I don't know how I'll feel in five years.

    • neilv 12 hours ago

      Not dissing any company; just pointing out a real concern to be considered, in this freshly disrupted and rapidly evolving environment.

      We all know that someone is going to try to slip one past the regulators, and they're probably on HN, and we know from the past that this can pay off hugely for them.

      Maybe, this time, the HN people who grumble about past exploiters and abusers in retrospect, can be more proactive, and help inform lawmakers and regulators in time.

      And for those of us who don't want to be activists, but also don't want to be abusers -- just run honest businesses -- we're reminded to think twice about what we do and how we do it, when we're operating in what seems like novel space.

  • gwervc 12 hours ago

    I distinctly remember Cloudflare being accused here of hosting spammers and selling protection against them a decade ago. Then suddenly the name became associated with positive things only, and the whole thing has been memory-holed.

    • robertlagrant 12 hours ago

      Sorry - what whole thing? An accusation in a comment on Hacker News?

  • TZubiri 13 hours ago

    Associating a cost with a detrimental action is a well established defense against sybil attacks.

  • theamk 2 hours ago

    doesn't seem this way?

    > Website owners can block all web scrapers using AI Audit, or let certain web scrapers through if they have deals or find their scraping beneficial.

    You don't have to make any deals, or participate in the marketplace, "block all" is right there.

    And if you are not using Cloudflare, you are going to be abused. This is a sad fact, but I have no idea why you are blaming Cloudflare and not AI companies.

  • flir 12 hours ago

    I dunno. If Cloudflare's protection doesn't work (and let's face it, it doesn't), why are you paying for it?

  • loceng 13 hours ago

    If they don't offer to just block the bots instead of you signing on, then I imagine it'd easily be seen as a racket.

    How much effort Cloudflare then puts into tracking circumvention efforts by bot networks is another question.

  • immibis 12 hours ago

    Well, as long as Cloudflare pays you to be "abused" (by which we mean, spending more money on bandwidth) it should be no problem for many of the site owners.

  • mrits 13 hours ago

    [flagged]

  • brigadier132 13 hours ago

    [flagged]

    • neilv 12 hours ago

      > This kind of cynicism is boring.

      IMHO, this kind of thinking is only cynicism iff you're only looking for your angle to profit, and someone is peeing on your parade, every time they boorishly mention irrelevant, imaginary concerns like "ethics", "legality", or "Geneva Convention".

      • brigadier132 12 hours ago

        I have no stake in cloudflare if that's what you are implying. Your comment is boring because I can identify the same comment in almost every thread about anything on hn.

        Cynicism has become rampant and it's cliche. It's almost always based on some conspiracy theory with no basis in reality and like your original comment shows, relies on abusing emotionally loaded language.

        My theory for why cynicism is so common nowadays is that it's a coping mechanism for people who are increasingly incapable of understanding what's happening in the modern world. People are biased towards cynicism because there is nothing worse than being the gullible idiot that is scammed all the time.

        • 1986 12 hours ago

          Interesting, as my theory for why cynicism is so common nowadays is that it's a coping mechanism for people who understand perfectly well what's happening in the modern world

          • brigadier132 12 hours ago

              You don't need to be a cynic if you have a grasp on reality. If you truly understand something you are capable of evaluating it on a case by case basis without resorting to pathos.

            • hu3 12 hours ago

              It's extremely rare to truly understand the agenda, motivation and consequences of complex enterprise initiatives like this one by Cloudflare. Not to mention it can be pivoted.

                So conjectures and hypotheses are not boring, but welcome in a discussion forum like HN.

              Anything else is gatekeeping discussion.

              • brigadier132 12 hours ago

                It's pretty easy to see how cloudflare arrived in a situation where they are in a position to create this sort of marketplace without resorting to conspiracies about them trying to take over the internet.

                They solved the very real problem of DDOS which consequently put them in a position to be a middleman between internet traffic between consumers and producers. Now they are expanding their business to take advantage of this privileged position they have.

                > This time, Cloudflare has formed a "marketplace" for the abuse from which they're protecting you, partnering with the abusers.

                When the original comment has a statement like this, it's a clear sign there is no potential for constructive discussion. Their understanding of markets has to be completely warped if they think a market existing constitutes partnering with one side of the exchange.

  • tempfile 11 hours ago

    The term "abuse" in this description is both confused and confusing. Websites are trying to meter out a public resource, which is something they're unable to do by themselves. Cloudflare is offering to help them, for a fee. Once the practice is metered, it isn't abuse anymore. It's just using the public service, which the website owner deliberately operates.

flaburgan 12 hours ago

I was recently speaking with people from OpenFoodFacts and OpenStreetMap, and I guess Wikipedia has the same issue. They are under constant DDoS by bots which are scraping everything, even though the full dataset can be downloaded for free with a single HTTP request. They said this useless traffic was a huge cost for them. This is not about copyright, just about bots being stupid and the people behind them not caring at all. We for sure need a solution to this. To keep a system online nowadays means not only do they get your data, you also pay for the privilege!

  • epc 11 hours ago

    I’ve just taken to blocking entire swaths of cloud services' IP networks. I don't care what the intentions are, my personal sites don't have the infinite bandwidth to put up with thousands of poorly written spiders.
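
    As a rough sketch of how that can be automated: AWS publishes its address ranges as JSON, and the other big clouds offer similar lists. Only the AWS URL and its JSON layout are assumed below; turning it into nginx deny rules is just one option:

      # Rough sketch: turn AWS's published address ranges into nginx "deny" rules.
      # Other clouds publish similar lists; the output file name is made up.
      import json
      import urllib.request

      RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

      with urllib.request.urlopen(RANGES_URL) as resp:
          data = json.load(resp)

      with open("deny-aws.conf", "w") as out:
          for entry in data.get("prefixes", []):
              out.write(f"deny {entry['ip_prefix']};\n")
          for entry in data.get("ipv6_prefixes", []):
              out.write(f"deny {entry['ipv6_prefix']};\n")

      # then include the generated file from the relevant nginx server block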

    • MathMonkeyMan 2 hours ago

      I use a VPN when bittorrent is running, and I've found that several websites outright block me "for security reasons." They like to show me my IP address, too, like a great secret has been revealed and the SWAT team is on their way.

    • neilv 11 hours ago

      Is there a public list of those address blocks, which you'd recommend?

  • luckylion 10 hours ago

    To be fair, some 20 years ago when I wanted to do something with Wikipedia data, I scraped them too, after having tried quite a bit to use the dumps.

    - dump availability was shaky at best back then (could see months go by without successful dumps)

    - you had to fiddle with it to actually process the dumps

    - you'd get the full wikipedia content, but you didn't have the exact wikipedia mediawiki setup, so a bunch of things were not rendered

    - you couldn't get their exact version of mediawiki, because they added more than what was released openly

    Now, I'm not saying that they were wrong to do that back then, and I assume things have improved. Their mission wasn't to provide an easy way to download & import the data so it wasn't a focus topic, and they probably ran more bleeding edge versions of mediawiki and plugins that they didn't deem stable enough for general public consumption. But it made it very hard to do "the right thing", and just whipping up a script to fetch the URLs I cared about (it was in Perl back then!) was orders of magnitude faster.

    At least for me, had they offered an easy way to set up a local mirror, I would've done that. I assume this is similar for many scrapers: they're extremely experienced at building scrapers, but they have no idea how to set up some software and how to import dumps that may or may not be easy to manage, so to them the cost of writing a scraper is much smaller. If you shift that imbalance, you probably won't stop everyone from hitting your live servers, but you'll stop some because it's easier for them not to and instead get the same data from a way that you provided them.

marcus_holmes an hour ago

> If you don’t compensate creators one way or another, then they stop creating, and that’s the bit which has to get solved

I'm not sure this is true. Maybe they stop creating commercial stuff for sale, and go do something else for money, but generally creative people don't stop creating just because they can't get paid for it.

sdflhasjd 11 hours ago

How long does the world-wide-web have left? It's always felt like it would be around forever, but at some point it will fade into obscurity like IRC has done. The golden age, I feel, has been gone a while, but "AI" seems like the beginning of the end.

neilv 13 hours ago

> A demo of AI Audit shared with TechCrunch showed how website owners can use the tool to see how AI models are scraping their sites. Cloudflare’s tool is able to see where each scraper that visits your site comes from, and offers selective windows to see how many times scrapers from OpenAI, Meta, Amazon, and other AI model providers are visiting your site.

And if I didn't authorize the freeloading copyright-laundering service companies to pound my server and take my content, then I need a really good lawyer, with big teeth and claws.

  • BSDobelix 13 hours ago

    I would say let's get rid of copyright and software patents altogether ;)

    • blibble 13 hours ago

      they're already gone

      but only if you're well funded (OpenAI)

      • CaptainFever an hour ago

        Remember that open source AI exists.

      • mdaniel 11 hours ago

        I've always heard it as "the golden rule:" those who have the gold make the rules

osigurdson 8 hours ago

Next step: generate reams of content using generative AI and get paid by Cloudflare when this is scanned by generative AI.

zebomon 9 hours ago

Here's a look at my AI Audit on Bingeclock for anyone who's curious. Interesting drop in the last 48 hours given that it coincided with Cloudflare's announcement.

https://www.bingeclock.com/blog/img/ai-audit-cloudflare-0923...

The payment program sounds intriguing, I suppose. I can't imagine it will do much to move the needle for websites that will become unviable due to traffic drain. Without a doubt, AI scrapers will (quite rationally from their POV) avoid anything but nominal payments until they're forced to do otherwise.

dageshi 9 hours ago

Ahhh I love it. The era of silos has well and truly arrived. I hope websites milk every dollar they can from the AI startups, they can afford it!

kylehotchkiss 9 hours ago

Is anybody else seeing an absolutely massive amount of Amazonbot crawls on their site? What are they up to? And why so aggressively?

  • n_ary 7 hours ago

    Most likely aspiring AI startups gathering as much data as they can before regulation jaws snap shut around them cutting off the blood stream.

    In this AI race (hype), data is finally the ultimate gold. Also, at the rate information is being polluted by GenAI junk all over, any remnant of real data is a holy grail.

    • kylehotchkiss 7 hours ago

      So any unknown or upcoming AIs would just show as Amazon?

sharpshadow 11 hours ago

It is indeed a huge waste to scrape the same whole site for changes and new content. If Cloudflare is capable of maintaining an overview of changes and updates, it could save a lot of resources.

The site could tell Cloudflare directly what changed and Cloudflare could tell the AI. The AI company buys the changes, Cloudflare pays the site and keeps a margin.

  • jsheard 11 hours ago

    The sitemap.xml spec already has fields for indicating the last time a page was changed and how often it's expected to change in the future, so that search engines can optimize their updates accordingly, but AI scrapers tend to disregard that and just download the same unchanged page 10,000 times for the hell of it.
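
    For illustration, a sketch of what a polite crawler could do with those fields before re-fetching anything (the sitemap URL and last-crawl date below are placeholders; error handling is omitted):

      # Sketch of a crawler that checks sitemap <lastmod> before re-fetching pages.
      import urllib.request
      import xml.etree.ElementTree as ET
      from datetime import datetime, timezone

      NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

      def parse_lastmod(value: str) -> datetime:
          dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
          return dt if dt.tzinfo else dt.replace(tzinfo=timezone.utc)

      def pages_changed_since(sitemap_url: str, last_crawl: datetime):
          with urllib.request.urlopen(sitemap_url) as resp:
              root = ET.parse(resp).getroot()
          for url in root.findall("sm:url", NS):
              loc = url.findtext("sm:loc", namespaces=NS)
              lastmod = url.findtext("sm:lastmod", namespaces=NS)
              # no <lastmod> means we can't tell, so err on the side of fetching
              if lastmod is None or parse_lastmod(lastmod) > last_crawl:
                  yield loc

      for page in pages_changed_since("https://example.com/sitemap.xml",
                                      datetime(2024, 1, 1, tzinfo=timezone.utc)):
          print("worth re-fetching:", page)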

    • Aachen 9 hours ago

      > sitemap.xml spec already has fields for indicating the last time a page was changed

      I did not know that bit! I'm considering adding this to my site now, because it sounds like it would save a lot of resources for everyone. Do (m)any crawlers use this information in your experience?

delanyoyoko 11 hours ago

I guess with a marketplace like this, if webmasters are happy and the AI agents are also happy, then we'll see quite a few services come up with similar solutions.

The end goal will be going from search engine optimization to something like LLM optimization or prompt engine optimization.

rahimnathwani 7 hours ago

  While it’s a bold idea, Cloudflare is not sharing a fully fleshed-out idea of what its marketplace will look like.
CatWChainsaw an hour ago

I guess Web3 will exist after all. In a microtransaction-per-webpage-utilized sense. No way websites don't start charging real people when there's money to be made.

siliconc0w 11 hours ago

Any recommendations for a simple WAF tool that will stop the majority of the abuse without having to use Cloudflare? I use Cloudflare just to keep that noise away from my logs but I'm not super keen to be dependent on them.

dangoodmanUT 10 hours ago

the blog makes it seem like the bot buys access

but if they are only tracking the bot via the user agent

then can't I piggyback on that user agent?

no ai scraper is going to include an auth header when accessing your website...

synack 6 hours ago

Are they gonna let me block the scrapers that run on Cloudflare Workers?

boristsr 13 hours ago

I'm pretty interested in how companies are exploring how to properly monetize or compensate for scraped content to help keep a strong ecosystem of quality content. I'd love to see more efforts like this.

  • kordlessagain 13 hours ago

    There's an HTTP status code for charging for access: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/402

    Then there's a Lightning Network protocol for it: https://docs.lightning.engineering/the-lightning-network/l40...

    With the Cloudflare stuff, it just seems like an excuse to sell Cloudflare services (and continue to force everyone to use it) as opposed to just figuring out a standard way of using what is already built to provide access for some type of micropayment.
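
    A toy sketch of what a 402 gate could look like follows; the header names and token check are invented here, since 402 itself is only a reserved status code and doesn't standardize a payment flow:

      # Requests without a recognized payment token get HTTP 402 plus a header
      # pointing at how to pay; the token would really be issued via something
      # like an L402 flow or a payment processor.
      from http.server import BaseHTTPRequestHandler, HTTPServer

      PAID_TOKENS = {"demo-token-123"}  # assumed, for illustration only

      class PaywallHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.headers.get("X-Payment-Token") in PAID_TOKENS:
                  body, status = b"the actual content\n", 200
              else:
                  body, status = b"payment required\n", 402
              self.send_response(status)
              if status == 402:
                  self.send_header("X-Payment-Info", "https://example.com/how-to-pay")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("127.0.0.1", 8082), PaywallHandler).serve_forever()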

    • jsheard 13 hours ago

      The problem is that soft technical measures like HTTP 402 and robots.txt aren't legally binding, so there's nothing stopping scrapers from just ignoring them. Cloudflare's value proposition here is that they will play the cat-and-mouse game of detecting things like spoofed user agents and residential proxies on your behalf, and actively block what appears to be scraper traffic unless they pay up.

      Unfortunately this probably means even more CAPTCHAs for people using VPNs and other privacy measures as they ramp up the bot detection heuristics.

      • Aachen 11 hours ago

        Sure it's not legally binding, but if I see >100000 requests coming from 1 IP address within a week, I'm also not legally bound to make that 402 error go away. By having an automated payment mechanism, the two parties could come to an agreement they're both happy about

        > there's nothing stopping scrapers from just ignoring them

        Feel free to ignore HTTP errors, but those pages don't contain the content you're looking for

        (For the record, I don't use HTTP 402, but I noncommercially host stuff and know what bots people are complaining about.)

        • jsheard 11 hours ago

          I mean it's not legally binding in the sense that if you start sending 402s or 403s to a scraper it can just take that as a signal to try again from a different IP address until it works - your server's clearly stated intent that the bot should pay up or go away isn't legally actionable. With enough effort you can chase the bots until they run out of resources, but few people have time to win that battle by themselves, hence delegating it to Cloudflare or similar.

      • TZubiri 13 hours ago

        "Unfortunately this probably means even more CAPTCHAs for people using VPNs and other privacy measures as they ramp up the bot detection heuristics"

        Yeah. You can't have it both ways. Similar dilemma for requiring identification vs disallowing immigrants.

  • hedora 13 hours ago

    Companies have been trying to find novel ways to bypass fair use / public domain laws for a long time.

    Each time they do, we see more consolidation of the media, and lower pay for the people that produce the content.

    I don’t see why this particular effort will turn out differently.

    • bippihippi1 13 hours ago

      I wonder if there's a way to test this hypothesis. Does content being freely reproducible with minor modification increase the demand for content creators, since new content is more valuable than the existing content that can be copied?

      I'd guess that since AI can fair-useify a work faster than any human, that fair-use reviewers, compilers/collagers, re-imaginers, etc content creators will be devalued.

      However, AIs are as yet unable to create work as innovative as humans. Therefore new work should be more valuable since now there is demand from people and AIs for their work. I'm assuming that AI companies pay for the work that they use in some way. Hopefully the aggregation sites continue to compete for content creators.

      • chrisweekly 11 hours ago

        > "I'm assuming that AI companies pay for the work that they use in some way."

        That mistaken assumption is at the heart of the problem under discussion.

  • dogleash 13 hours ago

    > help keep a strong ecosystem of quality content

    To the extent quality content does exist online: what isn't either already behind a paywall, or created by someone other than who will be compensated under such a scheme?

  • tomjen3 13 hours ago

    This won't work. If you are doing an AI startup, you will want to use GoogleBot for your crawler and this will bypass that.

    Not too much of a loss, since the only quality content is already behind paywalls, or on diverse wikistyle sites. Anything served with ads for commercial reasons is automatically drivel, based on my experience. There simply isn't a business in making it better.

    Edit: updated comment to not be needlessly divisive.

    • jsheard 13 hours ago

      It is trivial to detect fake GoogleBot traffic (Google provides ways to validate it) and Cloudflare already does so. See for yourself:

        curl -I -H "User-Agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/105.0.5195.102 Safari/537.36" https://www.cloudflare.com
      
      They'll immediately flag the request as malicious and return 403 Forbidden, even if your IP address is otherwise reputable.

      • matt-p 13 hours ago

        Now try it from a google cloud vm.

        • jsheard 13 hours ago

          Pretty sure that won't work, they let you validate whether an IP address is used by GoogleBot specifically, not just owned by Google in general. I doubt they are foolish enough to use the same pool of IP addresses for their internal crawlers and their public cloud.

          https://developers.google.com/search/docs/crawling-indexing/...

          • matt-p 12 hours ago

            It depends how the site has implemented it; a huge number just look for AS origination and *.googleusercontent.com
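
            The check Google actually documents (the link above) is a reverse DNS lookup on the connecting IP followed by a forward confirmation, which is cheap enough to do properly. Roughly, using only the standard library (the sample address is from a well-known Googlebot range):

              # Sketch of the reverse-then-forward DNS check for verifying Googlebot,
              # rather than trusting the AS or the user agent string.
              import socket

              def is_real_googlebot(ip: str) -> bool:
                  try:
                      host, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
                  except OSError:
                      return False
                  if not host.endswith((".googlebot.com", ".google.com")):
                      return False
                  try:
                      return ip in socket.gethostbyname_ex(host)[2]  # forward-confirm
                  except OSError:
                      return False

              # 66.249.66.1 sits in a published Googlebot range and should pass;
              # a Google Cloud VM's IP should not.
              print(is_real_googlebot("66.249.66.1"))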

datavirtue 2 hours ago

Wasn't the web designed to be scraped?

015a 11 hours ago

One minor, tedious thing that I've become so tired of lately is showcased very plainly in the screenshot in this article: That the Cloudflare admin dashboard has now prominently placed "AI Audit (ALPHA)" as a top-level navigation menu item at the very top of the list of a Cloudflare Account's products. Everyone is doing this, for AI products or whatever came before them, and it genuinely pushes me away from paying for Cloudflare, as I get the distinct sense that they aren't building the things or fixing the problems that I feel are important to me.

I would greatly appreciate the ability to customize the items and ordering of those items in this sidebar.

kijin 13 hours ago

AI scrapers are parasites.

I don't care whether you're OpenAI, Amazon, Meta, or some unknown startup. As soon as you generate a noticeable load on any of the servers I keep my eyes on, you'll get a blank 403 from all of the servers, permanently.

I might allow a few select bots once there is clear evidence that they help bring revenue-generating visitors, like a major search engine does. Until then, if you want training data for your LLM, you're going to buy it with your own money, not my AWS bill.

  • kccqzy 5 hours ago

    The AI scrapers are failing to discover something old-style search engines have been doing for decades: respecting a host and not giving them too much load. I'd say you did a good job banning those that generate noticeable load.

  • h8hawk 8 hours ago

    > AI scrapers are parasites.

    I've been making crawlers for a living! Thanks for informing me that I'm a parasite.

johnisgood 12 hours ago

How are they going to pay? How much? Can it be enforced?

zkid18 12 hours ago

What's wrong with AI agents accessing website content? We seem to have been happy with Google doing that for ages in exchange for displaying the website in search results.

  • red_admiral 12 hours ago

    The website owner chooses. They can say "nope" in robots.txt. Not everyone respects this, but Google does. Google can choose not to show that site as a result, if they want to.

    This adds a third option besides yes and no, which is "here's my price". Also, because cloudflare is involved, bots that just ignore a "nope" might find their lives a bit harder.

    • lolinder 12 hours ago

      Robots.txt is for crawlers. It's explicitly not meant to say one-off requests from user agents can't access the site, because that would break the open web.

      • Spivak 12 hours ago

        Yep, there's really two parts to this.

        * Some company's crawler they're planning to use for AI training data.

        * User agents that make web requests on behalf of a person.

        Blocking the second one because the user's preferred browser is ChatGPT isn't really in keeping with the hacker spirit. The client shouldn't matter, I would hope that the web is made to be consumed by more than just Chrome.

  • brigadier132 12 hours ago

    For traditional search indexing the interests of the aggregator and the content creator were aligned. AIs on the other hand are adversarial to the interest of content creators, a sufficiently advanced AI can replace the creator of the content it was trained on.

    • lolinder 12 hours ago

      We're talking in this subthread about an AI agent accessing content, not training a model on content.

      Training has copyright implications that are working their way through courts. AI agent access cannot be banned without fundamentally breaking the User Agent model of the web.

      • brigadier132 12 hours ago

        Ok, fine, let's restrict it to AI agents only, without training. It's still an adversarial relationship with the content creator. When you take an AI agent and ask it "find me the best italian restaurant in city xyz", it scans all the restaurant review sites and gives you back a recommendation. The content creator bears all the burden of creating and hosting the content and reaps none of the reward, as the AI agent has now inserted itself as a middleman.

        The above is also a much clearer / more obvious case of copyright infringement than AI training.

        > AI agent access cannot be banned without fundamentally breaking the User Agent model of the web.

        This is a non-sequitur but yes you are right, everything in the future will be behind a login screen and search engines will die.

        • lolinder 12 hours ago

          > reaps non of the reward

          Just to be clear what we're talking about: the reward in question is advertising dollars earned by manipulating people's attention for profit, right?

          I frankly don't think that people have the right to that as a business model and would be more than happy to see AI agents kill off that kind of "free" content.

          • brigadier132 12 hours ago

            [flagged]

            • lolinder 12 hours ago

              Classy. Have a nice day.

              • brigadier132 12 hours ago

                Classy is being so self absorbed that you have no hesitation to say that someone providing you a service should make nothing for it.

        • Spivak 12 hours ago

          > The content creator bears all the burden of creating and hosting the content and reaps non of the reward as the AI agent has now inserted itself as a middleman.

          As a user agent? My god, what's happened to our industry. Locking the web to known clients which are sufficiently not the user's agent betrays everything the web is for.

          Do you really hate AI so much that you'll give up everything you believe in to see it hurt?

          • brigadier132 12 hours ago

            Like I said in another comment, I'm pointing out what is going to actually happen based on incentives, not what I want to happen. I'd much rather the open web continue to exist and I think AI will be a beneficial thing for humanity.

            edit: to be clear, it's already happening. Blogs are moving to substack, twitter blocks crawling, reddit is going the same way in blocking all crawlers except google.

            • CaptainFever an hour ago

              To be optimistic, as long as anonymous access is a thing, or creating free accounts is a thing, such crawler blocks can probably be bypassed. I hope so, at least.

  • 6gvONxR4sf7o 12 hours ago

    The thing people have been doing for ages is a trade: I let you scrape me and in return you send me relevant traffic. The new choice isn't about a trade, so it's different.

  • spiderfarmer 12 hours ago

    And AI agents scrape your content in exchange for what exactly?

    • zkid18 5 hours ago

      Sorry, I distinguish here between an AI agent that basically automates the visual lookup on behalf of a user, and scraping to feed content into LLMs by big tech. I don't see any problem with the first one tbh.

  • lolinder 12 hours ago

    Yeah, there's a lot of confusion between AI training and AI agent access, and it's dangerous.

    Training embeds the data into the model and has copyright implications that aren't yet fully resolved. But an AI agent using a website to do something for a user is not substantially different than any other application doing the same. Why does it matter to you, the company, if I use a local LLaMA to process your website vs an algorithm I wrote by hand? And if there is no difference, are we really comfortable saying that website owners get a say in what kinds of algorithms a user can run to preprocess their content?

    • jsheard 12 hours ago

      > But an AI agent using a website to do something for a user is not substantially different than any other application doing the same.

      If the website is ad-supported then it is substantially different - one produces ad impressions and the other doesn't. Adblocking isn't unique to AI agents of course but I can see why site owners wouldn't want to normalize a new means of accessing their content which will inherently never give them any revenue in return.

      • lolinder 12 hours ago

        I don't believe that companies have the right to say that my user agent must run their ads. They can politely request that it does and I can tell my agent whether to show them or not.

        • jsheard 12 hours ago

          True, but by the same measure your user agent can politely request a webpage and the server has the right to say 403 Forbidden. Nobody is required to play by the other party's rules here.

          • lolinder 12 hours ago

            Exactly. The trouble is that companies want the benefits of being on the open web without the trade-offs. They're more than welcome to turn me down entirely, but they don't do that because that would have undesirable knock-on effects. So instead they try to make it sound like I have a moral obligation to render their ads.

j45 4 hours ago

Neat licensing idea - look forward to seeing some case studies.

Workaccount2 13 hours ago

Props to Cloudflare for referring to it as "scanning your data", which is probably the most technically accurate way to describe what AI training bots are doing.

NoMoreNicksLeft 12 hours ago

Great. The HR software my company uses can charge me when my own bot "scrapes" my paystub pdf.

AtNightWeCode 9 hours ago

Maybe they could solve some of the core issues instead. It is like CF lost the source code and is just pushing new, more or less useless features all the time. Even though I think this is a fair change.

micromacrofoot 7 hours ago

Absent legal changes, this mostly rewards companies that figure out how to scrape without being detected; this problem existed before AI.

johnsutor 13 hours ago

Or, you know, just create your own API for your platform and charge people per request to that.

meiraleal 11 hours ago

Wow, a big tech company thinking about creators, not about how to extract all they can but how to give back. That has become so uncommon nowadays. Cloudflare deserves their exponential growth. Kudos to them.

zackmorris 10 hours ago

Boy I'm sick of clicking "Verify you are human" on everything from GitLab to banking apps running Cloudflare.

Sick enough that I hope someone prominent at the EFF or similar takes Cloudflare to court over it.

One company shouldn't be allowed to police access to the internet. And certainly shouldn't be allowed to start gatekeeping what is viewable by discriminating against the person or software doing the viewing.

I worry that Cloudflare will keep escalating this unless they're sent a strong signal that it's not supported by the tech community. If you work there, it might be time to consider getting a different job. If you own stock, maybe divest. If you're connected, perhaps your associates can buy from competitors. That's probably the only way to get the board and CEO replaced these days.

  • gruez 9 hours ago

    >Sick enough that I hope someone prominent at the EFF or similar takes Cloudflare to court over it.

    On what basis? It sucks that you can't visit those sites without going through an interstitial, but at the end of the day, those sites are essentially private property and the owners can impose whatever requirements they want on visitors. It's not any different than sites that have registration walls, for instance.

  • jeroenhd 5 hours ago

    Cloudflare is just one of many products blocking unwanted network traffic. They're the biggest, for sure, but hardly the only one. If Cloudflare disappeared tomorrow, another would pop up instantly.

    The problem isn't Cloudflare, it's that the internet is filled with ill-willed bots, and those bots seem to have infected your network or your ISPs network as well.

    If ISPs did a better job taking action against infected IoT crap and spam farms, you wouldn't need to click so many CAPTCHAs.

    Without Cloudflare, you'd just see a page saying "blocked because of suspicious network activity", or nothing at all, or a redirect to a shock site if the site admin is feeling particularly spicy. If anything, Cloudflare CAPTCHAs are doing you a service by being a cheap and effective alternative to mass IP range blocks.

  • Maxion 9 hours ago

    Cloudflare is more of a symptom of underlying problems. I for sure don't use cloudflare because I love what they do.

  • laserbeam 9 hours ago

    Something I never considered, I wonder how clicking to be a human works for people with disabilities. There’s gotta be accessibility features there, and I bet bots are abusing them.

    • gruez 9 hours ago

      At least for Cloudflare "captchas", you don't have to solve anything, only click a button, so it's pretty accessible. My guess is that they care less about whether you're a human or not, and more about imposing resource costs on any attacker, because solving those challenges requires a full browser runtime (i.e. hundreds of megs of memory plus some non-trivial amount of CPU time). That's significantly more expensive than spamming requests.post() on a thousand threads.

    • Wingman4l7 9 hours ago

      Or, the company leaves the accessibility alternative broken, and shrugs.

  • Icathian 9 hours ago

    Do you also get mad at companies that make locks when people install them on their front doors?

  • shadowgovt 9 hours ago

    > I worry that Cloudflare will keep escalating this unless they're sent a strong signal that it's not supported by the tech community.

    I don't think that it's not supported by the tech community. Much of that community is on the receiving end of the bad actors. I know that depending on the day I, for one, have muttered under my breath "This would be much easier if everyone were using the same damn web browser."

xyzzy_plugh 13 hours ago

Ah yes, the ol' monopoly invents an illusionary marketplace ploy.

Cloudflare is obviously right here. AI has changed things so an open web is no longer possible. /s

What absolute garbage.

giancarlostoro 13 hours ago

I really love Cloudflare. They're always up to something interesting and different. I hope we see more companies rise up similar to Cloudflare. I almost want to say Cloudflare is everything we hoped Google would be, but Google became another corporate cog machine that innovates and then scraps things in one fell swoop. I don't recall the last time I heard of Cloudflare spinning something up just to wind it back down. I don't think it's impossible for them to make a bad choice, but I think they typically really think their projects through.

My biggest problem with AI is that once it starts getting legislated, it will be limited in how it can function and how it can be built; we are going to lock in existing LLMs like ChatGPT in the lead and stop anyone from competing, since newcomers won't be able to train on the same data.

My other biggest problem with "AI", or really LLMs (which is what everyone's hyped about), is the lack of offline-first capabilities.

  • nindalf 13 hours ago

    > last I heard of Cloudflare spinning something up just to wind it back down

    Cloudflare bet big on NFTs (https://blog.cloudflare.com/cloudflare-stream-now-supports-n...), Web3 (https://blog.cloudflare.com/get-started-web3/), Proof of stake (https://blog.cloudflare.com/next-gen-web3-network/). In fact they "bet on blockchain" way back in 2017 (https://blog.cloudflare.com/betting-on-blockchain/) but it's telling that they haven't published anything in the last couple of years (since Nov 2022). Since then the only crypto related content on blog.cloudflare.com is real cryptography - like data encryption.

    I'm not criticising. I'm just saying they're part of an industry that thought web3 was the Next Big Thing between 2017-2022 and then pivoted when ChatGPT released in Nov 2022. Now AI is the Next Big Thing.

    I wouldn't be surprised if a lot of the blockchain stuff got sunset over the next few years. Can't run those in perpetuity, especially if there aren't any takers.

    • giancarlostoro 12 hours ago

      I'm neutral on crypto; I see it like AI, it's just waiting on some breakthrough that pulls everyone in. My suspicion is someone needs to make it stupid easy to get into crypto.

      • CaptainFever an hour ago

        Crypto is actually pretty useful and common in some marginalised communities where payment processors usually refuse to service (e.g. some sex stuff).

  • clvx 13 hours ago

    Someone somewhere outside of your country's legal entities can still do all the things your country doesn't like and there's little to stop them. Governments might limit legal or commercial usage but it doesn't mean it won't exist.

    • giancarlostoro 13 hours ago

      It's much harder to pull off when you're hitting an international market: are you really going to ignore an entire country? Maybe if it was a small country with few citizens, but if the EU or US passes a law, you're going to miss out on an entire market.