It's impressive that Llama and the AI teams in general survived the metaverse push at Facebook. Congrats to the team for keeping their heads down and saving the company from itself.
It's all AI all the time now, though; I haven't seen any mention of our reimagined future of floating heads hanging out together in quite some time.
I'm working in Quest 3 almost every day. I use Immersed, as it implements virtual displays for my MacBook better than the alternatives, but I'm impressed with the Meta ecosystem. Granted, social interaction is still awkward without proper facial expressions, but it feels closer each year to the depicted vision.
I recently travelled and needed to work (coding and video editing in DaVinci) a lot in hotels and random places. I can't bring large screens everywhere (and I hate working with small fonts and screens), and Quest 3 was a perfect fit. Sometimes at home or the office (I have a private one), I just don't want to sit on my buttocks all the time, so I put on VR goggles and can keep working in any position (lying on a sofa or even sunbathing outdoors).
As soon as new XR/MR glasses become lighter (there are some good ones already - the Visor, Bigscreen Beyond 2, etc.), more and more people will discover how usable and optimized for work this tech is.
I'm quite a big fan of my Quest 1 as a cheap flight sim headset, too. I don't end up using it more than maybe twice a week, but that's more than worth it for the $400 I paid 5 years ago. It installs (or "sideloads" in present vernacular) Android apps like any other device, browses the web, and streams wireless VR from my desktop via ALVR when I want to play games. It does a lot of stuff you wouldn't expect out of a "deprecated" piece of hardware.
The trepidation behind VR for professional applications makes sense to me - it's expensive and tough to compare with what it's replacing. As a pure vehicle for fun though, I genuinely have no regrets with my Quest hardware. It was easily a better purchase than my Xbox One.
It’s coming: https://www.uploadvr.com/meta-employees-reportedly-working-w...
I am guessing because of Qwen 3 release they pulled back the reasoning model that was likely due to launch today.
Feels like Meta is going into the cloud services business, but in the AI domain. They resisted entering the cloud business for so long; with the success of AWS/Azure/GCP, I think they're realizing they can't stay on top with social networks alone, without owning a platform (hardware, cloud).
If Lidl can venture into cloud business, I guess so can Meta.
Don't forget the earths only bookstore either
SAM 3 (Segment Anything Model) is coming this summer
SAM's a really cool model, that's something to look forward to. I didn't see that in the LlamaCon notes, is that something they've announced elsewhere or just a rumor atm?
It was mentioned briefly. https://ai.meta.com/sam3
In this case the market basically validated itself. Companies are already using Llama for production workloads. It is offered as a first class LLM option in AWS, Azure, GCP and all other major hosting providers. Meta may have been getting marginal licensing fees out of it but now wants a bigger piece of the pie.
They seem to see the writing on the wall and have been panicked for a while, yes.
Gobbling up rising brands kept their finances going for a while, but the grand Metaverse pivot was clearly their (much struggling) attempt to invent their own titanic platform akin to Android or iPhone.
With that not gaining as much traction as they wanted as quickly as they wanted, they're still on the hunt, as here.
The metaverse is a great idea, but they should have partnered with Epic or Valve for it. The implementation was subpar.
Does anyone use Llama as their primary model for any use case? Maybe it's my fault for not spending much time with it, but I still couldn't find the applications for which Llama has an advantage over the competition.
I recently needed to classify thousands of documents according to some custom criteria. I wanted to use LLM classification from these thousands of documents to train a faster, smaller BERT (well, ModernBERT) classifier to use across millions of documents.
For my task, Llama 3.3 was still the best local model I could run. I tried newer ones (Phi4, Gemma3, Mistral Small) but they produced much worse results. Some larger local models are probably better if you have the hardware for them, but I only have a single 4090 GPU and 128 GB of system RAM.
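For reference, the labeling step of that distillation workflow can be sketched like this. It's a minimal illustration with stand-in names: `llm_classify` represents whatever local inference call you make (Llama 3.3 in my case), the toy keyword classifier below exists only so the example runs, and the actual ModernBERT fine-tuning is omitted.

```python
# Hypothetical sketch: label a few thousand documents with a local LLM,
# then use the resulting (doc, label) pairs to train a smaller
# ModernBERT classifier for the full corpus (training step not shown).

def build_training_set(docs, llm_classify, valid_labels):
    """Return (text, label) pairs, dropping documents the LLM
    could not map to a known label (malformed output happens)."""
    pairs = []
    for doc in docs:
        label = llm_classify(doc)
        if label in valid_labels:
            pairs.append((doc, label))
    return pairs

# Toy stand-in "LLM" that classifies by keyword, for illustration only.
docs = ["invoice for services", "meeting notes", "invoice overdue"]
fake_llm = lambda d: "finance" if "invoice" in d else "other"
print(build_training_set(docs, fake_llm, {"finance", "other"}))
```

Filtering out labels the LLM invented turned out to matter in practice; a small fraction of responses don't match any requested category and would otherwise poison the training set.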
How did you find ModernBERT performance Vs prior BERT models?
I didn't try original BERT at all because I didn't get good results from any LLMs on small document excerpts, so I assumed that a substantial context was necessary for good results. Traditional BERT only accepts up to 512 tokens, while ModernBERT goes up to 8192. I ended up using a 2048 token limit.
Here https://research.atspotify.com/2024/12/contextualized-recomm... ;)
It's pretty popular in the local LLM space
It used to be, but Llama 4 is useless for local use for most people.
Can someone please explain why Meta doesn't create subject-specific versions of their LLMs, such as one that knows only about computer programming, computers, and hardware/software?
I would have imagined such a thing would be smaller and thus run on smaller configurations.
But since I'm only a layman, maybe someone can tell me why this isn't the case?
One of the weirdest and most interesting parts of LLMs is that they grow more effective the more languages and disciplines they are trained in. It turns out training LLMs on code instead of just prose boosted their intelligence and reasoning capabilities by huge amounts.
Source? Sounds interesting
Generally, all that non-tech content still helps the model “to learn”.
Also, the software you're working on will generally have a real-world domain in some way; without knowing it, the AI will likely be a less effective assistant. Design conversations with it would likely be pretty non-fun, too.
Finally, the “bitter lesson” article[0] from a couple years ago is I think somewhat applicable too.
[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
To add on to the sibling: specialized models, including fine-tuned ones, continually have their lunch eaten by general models within 3-6 months. This time around it's mixture-of-experts that'll do it; next year it'll be something else. Tuned models are expensive to produce and are benchmark kings, but do less well in the real-world qualitative experience. The juice just ain't worth the squeeze most of the time.
Meta does have some specialized models, though; Llama Guard was released for Llama 2 and 3.
Other companies have done this (see Qwen Coder). It doesn't scale past a few disciplines like math and code, though, and using a mixture of experts gives you most of the same benefits.
Facebook did a great job open sourcing Llama and pushing the market to being competitive, but this list seems super shallow.
0. Introducing Llama API in preview
This one is good but not centre-stage worthy. Other [closed] model providers have been offering this for a long time.
1. Fast inference with Llama API
How fast? And how much faster than others? This section talks about latency, and there are absolutely no numbers in it!
2. New Llama Stack integrations
Speculation with zero new integrations. Llama Stack with NVIDIA had already been announced, and then this section ends with '...others on new integrations that will be announced soon. Alongside our partners, we envision Llama Stack as the industry standard for enterprises looking to seamlessly deploy production-grade turnkey AI solutions.'
3. New Llama Protections and security for the open source community
This one is not only the best on the page but is actually good, with the announcement of Llama Guard 4, LlamaFirewall, and Llama Prompt Guard 2.
4. Meet the Llama Impact Grant recipients
Sorry, but neither the gross amount ($1.5 million USD) nor the average ($150K per recipient) is anything significant at Facebook scale.
Anyone manage to sign up for the waitlist? I just get a redirect loop back to the login when requesting access.
No new model? Maybe after the Qwen 3 release today they decided to hold back on Llama 4 Thinking until it benchmarks more competitively.
Beyond solid benchmarks, Alibaba's power move was dropping a bunch of models available to use and run locally today. That's already disruptive, and the slew of fine-tunes to come will be good for all users and builders.
https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2...
What's the minimum GPU/NPU hardware and memory to run Qwen3 locally?
There is a 0.6B model so basically nothing.
And the MoE 30B one has a decent shot at running OK without a GPU. I'm on a 5800X3D, so two generations old, and it's still very usable.
`model.safetensors` for Qwen3-0.6B is a single 1.5GB file.
Qwen3-235B-A22B has 118 `.safetensors` files at 4GB each.
There are a bunch of models and quants between those.
Does it run in 8x80G? Or does the KV cache and other buffers push it over the edge?
I'm running 4B on my 8GB AMD 7600 via ollama
Qwen3 is a family of models; the very smallest are only a few GB and will run comfortably on virtually any computer of the last 10 years or a recent-ish smartphone. The largest - well, that depends on how fast you want it to run.
There are models down to 0.6B and you can even run Qwen3 30B-A3B reasonably fast on CPU only.
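Those sizes can be sanity-checked with back-of-the-envelope arithmetic: at bf16, weights take roughly two bytes per parameter. This is a rough estimate that ignores embeddings, metadata, and runtime buffers like the KV cache, which is why real checkpoints run slightly larger:

```python
# Rough weight-memory estimate: bytes ~= params * bytes_per_param.
# bf16/fp16 uses 2 bytes per parameter; 4-bit quantization ~0.5 bytes.

def weight_gb(params, bytes_per_param=2.0):
    return params * bytes_per_param / 1e9

print(weight_gb(0.6e9))       # 1.2   -> vs the 1.5 GB safetensors file
print(weight_gb(235e9))       # 470.0 -> vs 118 x 4 GB safetensors files
print(weight_gb(235e9, 0.5))  # 117.5 -> a 4-bit quant, roughly
```

By this estimate the 235B model's bf16 weights (~470 GB) would fit in 8x80 GB (640 GB) with roughly 170 GB left over, so whether the KV cache pushes it over the edge depends on context length and batch size.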
They released the Llama 4 suite three weeks ago.
Meta needs to stop open-washing their product. It simply is not open-source. The license for their precompiled binary blob (ie model) should not be considered open-source, and the source code (ie training process / data) isn’t available.
They've painted themselves into a corner - the second people see the announcement that they've enforced the license on someone, people will switch to actual open source licensed models and Meta's reputation will take a hit.
It's ironic that China is acting as a better good faith participant in open source than Meta. I'm sure their stakeholders don't really care right now, but Meta should switch to Apache or MIT. The longer they wait the more invested people will be and the more intense the outrage when things go wrong.
Applying Apache or MIT to a binary blob doesn't make it open source either
> the source code (ie training process / data) isn’t available
The training data is all scraped from the internet, ebooks from libgen, papers from Sci-Hub, and suchlike.
They don't have the right to redistribute it.
This was actually my first impression while reading the post. It mentions "open source" everywhere, but how on earth is it open source without the training data?
Almost no company is going to release training data because they don't want to waste time with lawsuits. That's why it doesn't happen. Until governments fix that issue, I don't even think the "it's not really open without training data!!!" argument is worth any time. It's more worth focusing on the various restrictions in the LLaMA license, or even better, questioning whether model weights can be licensed at all.
Unlucky timing for Meta...
It's not about luck, pretty sure that Qwen intentionally bullied them.
Was there a ball pit
Yeah, it was for the Llama team because they love playing in ball pits instead of releasing good models.
Did I read that right, that they have a gated 3.3 8B?
Meh
Lmao why are they doing LlamaCon, a convention with a subpar product?
This is actually a legit question under the surface.
The problem, in my opinion, is that MZ/CC/AA-D feel that they have to release models of some flavor every month to stay competitive.
And when you have the rest of the company planning to throw you an on-stage party to announce whatever the next model is, and the venue and guests are paid for, you're gonna have the show whether the content is good or not.
The Llama program right now is "we must go faster," but without a clear product direction or niche that they're trying to build towards. Very little is said no to; it's just "be the best at everything." And they started from behind: how can you think you're gonna catch up to a 1-2 year head start just with more people? The line they want to believe is "the best LLM, not just the best OSS LLM."
Because of the constant pressure to release something every month (nearly, but not a huge exaggeration), and the product direction coming from MZ himself, the team is not really great at anything. There is a huge apparatus of people working on it, yet half of it or more, I believe, is baggage required because of what Meta is.
I guess we'll see how long this can be maintained.
There is a potential world where Meta uses AI as a vector to tap into the home.
Like, literally building smart homes.
Locally intelligent in ways that enable truly magical smart home experiences while preserving privacy and building trust.
But connected in ways that facilitate pseudo-social interactions, entertainment, and commerce.
Meta's biggest competitors are Apple and Amazon. This is the first clear opportunity they've had to leapfrog both.
>There is a potential world where Meta [is]... literally building smart homes... while preserving privacy and building trust
I'm earnestly not sure which Meta is less qualified for: building physical homes or building privacy and trust.
Visit SE Asia sometime and you'll experience a very different sentiment. Hundreds of millions of people rely on Meta to provide valuable services every day, some of them borderline essential. This is undebatable.
The outsized public hatred toward Meta is almost entirely driven by a bureaucratic, anti-technology Europe (that has finally realized that their overstepping is hurting their future) and a US political institution that needed someone to demonize to keep us all distracted.
There are very good reasons to dislike Meta and Meta products. But they're likely not the ones you're referring to.
Their business model ties profitability directly to maximal surveillance and psychological manipulation as the basis for inducing addiction, manufactured demand, and impulse spending, with only theatrical attempts at hiding the lack of inhibitions or safeguards around harnessing material damaging to children, teens, adults, and society at large.
That is the economic structure of their business model.
Now juice that model with billions of dollars in revenue and trillions in potential market cap for shareholders, who demand double-digit percentage growth per year.
That defines the scale of available resources to drive the business model forward.
This is a machine designed to scale up and maximally leverage seemingly small conflicts of interest into a global monster that feeds on mental and social decay.
——
Of course, it benefits Facebook and customers to mix in as many genuine side products and services with real value as possible.
But that only wedges the destructive core into individual lives and society even more.
Now add AI algorithms to their core competencies of surveillance integration and psychological manipulation, and to the side value honey features.
We are getting Stockholm’ed and stewed in a lot of high walled slow cookers these days.
What kind of services? IM apps? Thailand uses LINE; Vietnam, Zalo; Cambodia and Myanmar, Telegram and Viber; and in Indonesia many use several IM apps at the same time.
It's not the IM apps; the SE Asia region, which I suspect the author is referencing, predominantly uses WhatsApp.
The value in SE Asia is mostly B2C. Instead of the marketplace feature, most local tiny businesses (or even big ones willing to evade tax by not having any physical presence) will open a small-business or general page and publish their wares as posts. Live streams are used to demo products or services now and then. People follow these pages and flock over to buy things.
In a sense, Facebook and WhatsApp are like the Amazon/AliExpress of SE Asia. I was there for 5 months visiting a friend (and recovering from burnout), and the number of people using such pages to sell anything from basic clothing to food to services is HUGE! It's literally a huge business hub for people to discover and make online purchases. In summary, Facebook pages are the e-commerce front (due to the lack of Shopify/Amazon-like operators who can handle logistics and payments) for individual businesses.
There were many journalist reports about this phenomenon several years back, but I am too sleepy and tired to link those.
"Dumb fucks" is what the founder of the company has thought about its users since day 1. They've been caught lying to cover up terrible things they've done so many times that it's just assumed at this point. Anyone relying on their services is being taken advantage of, first by Meta, and then by their own failed economy that won't provide an alternative. I've never once considered what Europe thinks.