• Billiam
    link
    fedilink
    English
    0
    6 months ago

    So can someone who understands this stuff better than me explain how the L3 cache would affect performance? My X3D has a 96 MB cache, and all of these offerings are lower than that.

    • @brucethemoose@lemmy.world
      link
      fedilink
      English
      0
      edit-2
      6 months ago

      This has no X3D; the L3 is shared between CCDs. The only odd thing about this is it has a relatively small “last level” cache on the GPU/Memory die, but X3D CPUs are still kings of single-threaded performance since that L3 is right on the CPU.

      This thing has over twice the RAM bandwidth of the desktop CPUs though, and some apps like that. Just depends on the use case.

  • ObsidianZed
    link
    fedilink
    English
    0
    6 months ago

    Much like their laptops, I’m all for the idea, but what makes this desirable by those of us with no interest in AI?

    I’m out of that loop though I get that AI is typically graphics processing heavy, can this be taken advantage of with other things like video rendering?

    I just don’t know exactly what an AI CPU such as the Ryzen AI Max offers over a non-AI equivalent processor.

    • @NuXCOM_90Percent@lemmy.zip
      link
      fedilink
      English
      0
      edit-2
      6 months ago

      There is a massive push right now for energy-efficient alternatives to Nvidia GPUs for AI/ML. PLENTY of companies are dumping massive amounts of money on Macs and rapidly learning the lesson the rest of us learned decades ago in terms of power and performance.

      The reality is that this is going to be marketed for AI because it has an APU which, keeping it simple, is a CPU+GPU. And plenty of companies are going to rush to buy them for that and a very limited subset will have a good experience because they don’t have time sensitive operations.

      But yeah, this is very much geared for light-moderate gaming, video rendering, and HTPCs. That is what APUs are actually good for. They make amazing workstations. I could also see this potentially being very useful for a small business/household local LLM for stuff like code generation and the like but… those small scale models don’t need anywhere near these resources.

      As for framework being involved: Someone has kindly explained to me that even though you have to replace the entire mobo to increase the amount of memory, you can still customize your side panels at any moment so I guess that is fitting the mission statement.

      • @ilinamorato@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        For modularity: There’s also modular front I/O using the existing USB-C cards, and everything they installed uses standard connectors.

    • @Appoxo@lemmy.dbzer0.com
      link
      fedilink
      English
      0
      6 months ago

      what makes this desirable by those of us with no interest in AI?

      Just maybe not all products need to be for everyone.
      Sometimes it’s fine if a product fits your label of “Not for me”.

    • miss phant
      link
      fedilink
      English
      0
      edit-2
      6 months ago

      I hate how power hungry the regular desktop platform is so having capable APUs like this that will use less power at full load than a comparable CPU+GPU combo at idle, is great, though it needs to become a lot more affordable.

    • @brucethemoose@lemmy.world
      link
      fedilink
      English
      0
      edit-2
      6 months ago

      There’s lots of workstation niches that are gated by VRAM size, like very complex rendering, scientific workloads, image/video processing… It’s not mega fast, but basically this can do things at a reasonable speed that you’d normally need a $20K+ computer to even try.

      That aside, the IGP is massively faster than any other integrated graphics you’ll find. It’s reasonably power efficient.

    • @unexposedhazard@discuss.tchncs.de
      link
      fedilink
      English
      0
      6 months ago

      Much like their laptops

      It’s nothing like their laptops, that’s the issue :/ Soldered-in stuff all around, nonstandard parts that make it useless as a standard PC or gaming console.

      • ObsidianZed
        link
        fedilink
        English
        0
        6 months ago

        Sorry, I was stating that “much like their laptops, I like the idea of these desktops.” I was not trying to insinuate that they themselves are alike.

          • @commander@lemmings.world
            link
            fedilink
            English
            0
            6 months ago

            Lol. Are you nuts? Am I really supposed to sit here and list off what makes a great product for a great price?

            Let’s be real. You don’t like how I criticized how people like you are getting taken for a ride so you’re desperate to make it seem like it’s not true.

            The sooner you realize how you’re being taken advantage of, the sooner you can start to do something about it.

            • @brucethemoose@lemmy.world
              link
              fedilink
              English
              0
              edit-2
              6 months ago

              Am I really supposed to sit here and list off what makes a great product for a great price?

              I don’t understand what you are asking for.

              You don’t have to be extensive, but… what would you want instead? A more traditional Mini PC? A dGPU instead? A different size laptop? Like, if you could actually tell Framework what you want, in brief, what would you say?

              • @commander@lemmings.world
                link
                fedilink
                English
                0
                edit-2
                6 months ago

                Fair enough.

                I skimmed it for a few seconds, got a little bit ill at the $1100 starting price, and then it occurred to me: what is this for?

                Wasn’t framework’s whole thing about making modular laptops? What value are they bringing to the mini-ITX market? They’re already modular. In fact, it looks like they’re taking away customizability with soldered RAM.

                You asked me what I want, and this is definitely what I don’t want. If they wanted to make this product appealing to me, they’d have to lower the price and live more modest lifestyles with the more modest profit margins.

                Edit: After closer inspection (albeit, not that close so I may have missed something) it looks like this… thing doesn’t even have a dedicated GPU. Yeah, framework can suck my fucking balls lol.

                You can literally get a 4070 gaming laptop these days for ~$1000 and framework is trying to push this shit? They can fuck off so hard it’s not even funny. This is why the free world never has enough to go around, because we waste our excess on dumb shit like this.

                Here’s a gaming laptop with a 4070 and a 144hz screen for $900 at Walmart:

                https://www.walmart.com/ip/Lenovo-LOQ-15-6-FHD-144Hz-Gaming-Notebook-Ryzen-7-7435HS-16GB-RAM-512GB-SSD-NVIDIA-GeForce-RTX-4070-Luna-Grey-Octa-Core-Display-Ram/13376108763

                Fuck framework.

                • @brucethemoose@lemmy.world
                  link
                  fedilink
                  English
                  0
                  edit-2
                  6 months ago

                  This is ostensibly more of a workstation/dev thing. The integrated GPU is more or less like a very power-efficient laptop 4070/4080 with unlimited VRAM, depending on which APU you pick, and the CPU is very fast, with desktop Ryzen CCDs but double the memory bandwidth of even a 9800X3D. In that sense, it’s a steal compared to Nvidia DIGITS or an Apple M4 Max, and Mini PC makers’ alternatives haven’t really solidified yet.

                  I think Framework knows they can’t compete with a $900 Walmart laptop and the crazy bulk pricing/corner cutting they do, nor can they price/engineer things (with the same bulk discounts) at the higher end like a ROG Z13/G14.

                  So… this kinda makes sense to me. They filled a gap where OEMs are enshittifying things, which feels very framework to me.

  • @javacafe@lemm.ee
    link
    fedilink
    English
    0
    edit-2
    6 months ago

    I can’t even get 96GB of RAM to work properly with the latest AMD drivers on my Framework. How will I know that AMD drivers won’t be a fuck-up for this PC in configurations over 64GB???

      • yeehaw
        link
        fedilink
        English
        0
        6 months ago

        I have a Noctua fan in my PC. Quiet AF. I don’t hear it and it sits beside me.

    • @xradeon@lemmy.one
      link
      fedilink
      English
      0
      6 months ago

      Hmm, probably not. I think it just has the single 120mm fan that probably doesn’t need to spin up that fast under normal load. We’ll have to wait for reviews.

      • @cholesterol@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        I also just meant given the size constraints in tiny performance PCs. More friction in tighter spaces means the fans work harder to push air. CPU/GPU fans are positioned closer to the fan grid than on larger cases. And larger cases can even have a bit of insulation to absorb sound better. So, without having experimented with this myself, I would expect a particularly small and particularly powerful (as opposed to efficient) machine to be particularly loud under load. But yes, we’ll have to see.

    • @enumerator4829@sh.itjust.works
      link
      fedilink
      English
      0
      6 months ago

      Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)

      IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what’s reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with onboard memory gaining ground. I think we’ll see the same trend on consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI slop, and socketed RAM simply won’t work.

      It’s sad, but in a few generations I think only the lower end consumer CPUs will be possible to use with socketed RAM. I’m betting the high performance consumer CPUs will require not only soldered, but on-board RAM.

      Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg

      • @wabafee@lemmy.world
        link
        fedilink
        English
        0
        edit-2
        6 months ago

        Sounds like a downgrade to me. I’d rather have the ability to add more RAM than be stuck with a soldered, limited amount, no matter how high-performance it is. Especially for consumer stuff.

        • @Zink@programming.dev
          link
          fedilink
          English
          0
          6 months ago

          Looking at my actual PCs built in the last 25 years or so, I tend to buy a lot of good spec ram up front and never touch it again. My desktop from 2011 has 16GB and the one from 2018 has 32GB. With both now running Linux, it still feels like plenty.

          When I go to build my next system, if I could get a motherboard with 64 or 128GB soldered to it, AND it was like double the speed, I might go for that choice.

          We just need to keep competition alive in that space to avoid the dumb price gouging you get with phones and Macs and stuff.

      • @barsoap@lemm.ee
        link
        fedilink
        English
        0
        6 months ago

        I definitely wouldn’t mind soldered RAM if there’s still an expansion socket. Solder in at least a reasonable minimum (16G?) and not the cheap stuff but memory that can actually use the signal integrity advantage, I may want more RAM but it’s fine if it’s a bit slower. You can leave out the DIMM slot but then have at least one PCIe x16 expansion slot. A free one, one in addition to the GPU slot. PCIe latency isn’t stellar but on the upside, expansion boards would come with their own memory controllers, and push come to shove you can configure the faster RAM as cache / the expansion RAM as swap.

        Heck, throw the memory into the CPU package. It’s not like there’s ever a situation where you don’t need RAM.

        • @enumerator4829@sh.itjust.works
          link
          fedilink
          English
          0
          6 months ago

          All your RAM needs to be the same speed unless you want to open up a rabbit hole. All attempts at that thus far have kinda flopped. You can make very good use of such systems, but I’ve only seen it succeed with software specifically tailored for that use case (say databases or simulations).

          The way I see it, RAM in the future will be on package and non-expandable. CXL might get some traction, but naah.

          • God's hairiest twink
            link
            fedilink
            English
            0
            6 months ago

            Couldn’t you just treat the socketed ram like another layer of memory effectively meaning that L1-3 are on the CPU “L4” would be soldered RAM and then L5 would be extra socketed RAM? Alternatively couldn’t you just treat it like really fast swap?

            • @barsoap@lemm.ee
              link
              fedilink
              English
              0
              6 months ago

              Using it as cache would reduce total capacity as cache implies coherence, and treating it as ordinary swap would mean copying to main memory before you access it, which is silly when you can access it directly. That is, you’d want to write a couple of lines of kernel code to use it effectively, but it’s nowhere close to rocket science. Nowhere near as complicated as making proper use of NUMA architectures.

            • Balder
              link
              fedilink
              English
              0
              edit-2
              6 months ago

              Could it work?

              Yes, but it would require:

              • A redesigned memory controller capable of tiering RAM (which would be more complex).
              • OS-level support for dynamically assigning memory usage based on speed (Operating systems and applications assume all RAM operates at the same speed).
              • Applications/libraries optimized to take advantage of this tiering.

              Right now, the easiest solution for fast, high-bandwidth RAM is just to solder all of it.

            • @enumerator4829@sh.itjust.works
              link
              fedilink
              English
              0
              6 months ago

              Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.

              Caches won’t help with bandwidth-bound compute (read: ”AI”) if the streamed dataset is significantly larger than the cache. A cache will only speed up repeated access to a limited set of data.
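To make that concrete, here’s a toy model of why cache size stops mattering for streaming workloads (all numbers are made up for illustration: ~1 TB/s cache, 256 GB/s RAM):

```python
def effective_bandwidth_gbs(
    dataset_gb: float,
    cache_gb: float,
    cache_bw_gbs: float = 1000.0,  # assumed cache bandwidth, illustrative only
    ram_bw_gbs: float = 256.0,     # the APU's advertised memory bandwidth
) -> float:
    """Toy model: when you stream the whole dataset each pass, the fraction
    of accesses served from cache is roughly cache_size / dataset_size."""
    hit_rate = min(cache_gb / dataset_gb, 1.0)
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * ram_bw_gbs

# A working set that fits in a 96 MB cache runs at cache speed...
print(effective_bandwidth_gbs(dataset_gb=0.05, cache_gb=0.096))  # 1000.0
# ...but streaming a 70 GB model barely notices the cache at all.
print(effective_bandwidth_gbs(dataset_gb=70.0, cache_gb=0.096))  # ~257
```

Which is also why the 96 MB X3D cache mentioned upthread helps games (small hot working sets) but wouldn’t help streaming a big model.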

          • @barsoap@lemm.ee
            link
            fedilink
            English
            0
            edit-2
            6 months ago

            The cache hierarchy has flopped? People aren’t using swap?

            NUMA also hasn’t flopped, it’s just that most systems aren’t multi-socket, or clusters. Different memory speeds connected to the same CPU is not ideal and you don’t build a system like that, but among upgraded systems that’s not rare at all, and software-wise the worst thing that’ll happen is you get the lower memory speed. Which you’d get anyway if you only had socketed RAM.

            • Jyek
              link
              fedilink
              English
              0
              6 months ago

              In systems where memory speeds are mismatched, the system runs at the slowest module’s speed. So you’d literally be making the soldered, faster memory slower. Why even have soldered memory at that point?

              • @barsoap@lemm.ee
                link
                fedilink
                English
                0
                edit-2
                6 months ago

                I’d assume the soldered memory to have a dedicated memory controller. There’s also no hard requirement that a single controller can’t drive different channels at different speeds. The only hard requirement is that one channel needs to run at one speed.

                …and the whole thing becomes completely irrelevant when we’re talking about PCIe expansion cards; the memory controller doesn’t care.

            • @enumerator4829@sh.itjust.works
              link
              fedilink
              English
              0
              6 months ago

              Yeah, the cache hierarchy is behaving kinda wonky lately. Many AI workloads (and that’s what’s driving development lately) are constrained by bandwidth, and cache will only help you with a part of that. Cache will help with repeated access, not as much with streaming access to datasets much larger than the cache (i.e. many current AI models).

              Intel already tried selling CPUs with both on-package HBM and slotted DDR-RAM. No one wanted it, as the performance gains of the expensive HBM evaporated completely as soon as you touched memory out-of-package. (Assuming workloads bound by memory bandwidth, which currently dominate the compute market)

              To get good performance out of that, you may need to explicitly code the memory transfers to enable prefetch (preferably asynchronous) from the slower memory into the faster, à la classic GPU programming. YMMV.

              • @barsoap@lemm.ee
                link
                fedilink
                English
                0
                6 months ago

                I wasn’t really thinking of HPC but my next gaming rig, TBH. The OS can move often accessed pages into faster RAM just as it can move busy threads to faster cores, gaining you some fps a second or two after alt-tabbing back to the game after messing around with firefox. If it wasn’t for memory controllers generally driving channels all at the same speed that could already be a thing right now. It definitely already was a thing back in the days of swapping out to spinning platters.

                Not sure about HBM in CPUs in general but with packaging advancement any in-package stuff is only going to become cheaper, HBM, pedestrian bandwidth, doesn’t matter.
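That hot-page idea is simple enough to sketch. A hypothetical toy pager (names and policy invented here, not how any real kernel is structured) that keeps the most-accessed pages in the fast tier:

```python
from collections import Counter

class TieredPager:
    """Toy two-tier memory: the hottest pages live in fast (soldered) RAM,
    the rest in the slow (socketed/expansion) tier. A real OS would use
    hardware access bits and migrate pages in the background; the policy
    sketched here is the same idea."""

    def __init__(self, fast_pages: int):
        self.fast_pages = fast_pages
        self.access_counts = Counter()

    def access(self, page: int) -> str:
        self.access_counts[page] += 1
        # Re-rank on every access for simplicity; keep the hottest pages fast.
        fast_set = {p for p, _ in self.access_counts.most_common(self.fast_pages)}
        return "fast" if page in fast_set else "slow"

pager = TieredPager(fast_pages=2)
for _ in range(100):       # the game's working set gets hammered...
    pager.access(1)
    pager.access(2)
pager.access(3)            # ...while a background page is touched once
print(pager.access(1))     # fast
print(pager.access(3))     # slow
```

Linux’s NUMA balancing already does something in this spirit across sockets, so it’s less exotic than it sounds.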

      • @unphazed@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        Honestly I upgrade every few years and usually have to purchase a new mobo anyhow. I do think this could lead to fewer options for mobos, though.

        • @confusedbytheBasics@lemm.ee
          link
          fedilink
          English
          0
          6 months ago

          I get it but imagine the GPU style markup when all mobos have a set amount of RAM. You’ll have two identical boards except for $30 worth of memory with a price spread of $200+. Not fun.

        • @enumerator4829@sh.itjust.works
          link
          fedilink
          English
          0
          6 months ago

          I don’t think you are wrong, but I don’t think you go far enough. In a few generations, the only option for top performance will be a SoC. You’ll get to pick which SoC you want and what box you want to put it in.

          • @GamingChairModel@lemmy.world
            link
            fedilink
            English
            0
            6 months ago

            the only option for top performance will be a SoC

            System in a Package (SiP) at least. Might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting edge memory production node from Samsung or whatever foundry where the memory is being made.

            But with advanced packaging going the way it’s been over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.

              • @GamingChairModel@lemmy.world
                link
                fedilink
                English
                0
                edit-2
                6 months ago

                No, I don’t think you owe an apology. It’s a super common terminology almost to the point where I wouldn’t really even consider it outright wrong to describe it as a SoC. It’s just that the blurred distinction between a single chip and multiple chiplets packaged together are almost impossible for an outsider to tell without really getting into the published spec sheets for a product (and sometimes may not even be known then).

                It’s just more technically precise to describe them as SiP, even if SoC functionally means something quite similar (and the language may evolve to the point where the terms are interchangeable in practice).

      • @exocortex@discuss.tchncs.de
        link
        fedilink
        English
        0
        6 months ago

        There’s even the next iteration already happening: Cerebras is making wafer-scale chips with integrated SRAM. If you want the highest memory bandwidth to your CPU core, it has to lie exactly next to it ON the chip.

        Ultimately RAM and processor will probably be indistinguishable with the human eye.

      • @Nalivai@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        Apparently AMD wasn’t able to make socketed RAM work, timings aren’t viable. So Framework has the choice of doing it this way or not doing it at all.

        • @JcbAzPx@lemmy.world
          link
          fedilink
          English
          0
          6 months ago

          In that case, not at all is the right choice until AMD can figure out that frankly brain dead easy thing.

          • @alphabethunter@lemmy.world
            link
            fedilink
            English
            0
            6 months ago

            “Brain dead easy thing”… All you need to do is manage the signal integrity of super-fast RAM to a super-hungry, state-of-the-art SoC that benefits from memory as fast as it can get. Sounds easy af. /s

            They said that it was possible, but they lost over half of the speed doing it, so it was not worth it. It would severely cripple performance of the SOC.

            The only real complaint here is calling this a desktop, it’s somewhere in between a NUC and a real desktop. But I guess it technically sits on a desk top, while also being an itx motherboard.

      • Jyek
        link
        fedilink
        English
        0
        6 months ago

        Signal integrity is a real issue with dimm modules. It’s the same reason you don’t see modular VRAM on GPUs. If the ram needs to behave like VRAM, it needs to run at VRAM speeds.

        • Natanox
          link
          fedilink
          English
          0
          6 months ago

          Then don’t make it work like that. Desktop PCs are modular and Framework made a worse product in terms of modularity and repairability, the main sales of Framework. Just, like… wtf. This Framework product is cursed and shouldn’t exist.

          • @brucethemoose@lemmy.world
            link
            fedilink
            English
            0
            6 months ago

            There’s little point in framework selling a conventional desktop.

            I guess they could have made another laptop size with the dev time, but… I dunno, this seems like a niche that needs to be filled.

            • @Manalith@midwest.social
              link
              fedilink
              English
              0
              6 months ago

              This is where I’m at. The Framework guy was talking about how very few companies are using this AMD deal because the R&D to add it to existing models wasn’t very viable; you really only have the Asus Z13. So I feel like being ahead of the game there will be a benefit in the long run as far as their relationship with AMD. Plus they’re also doing a 12-inch laptop now as well, so it’s not like they committed all their resources to this.

  • @brucethemoose@lemmy.world
    link
    fedilink
    English
    0
    edit-2
    6 months ago

    Holy moly this is awesome! I am in for the 128GB SKU.

    I know people are going to complain about non upgradable memory, but you can just replace the board, and in this case it’s so worth it for the speed/power efficiency. This isn’t artificial crippling, it physically has to be soldered, at least until LPCAMM catches on.

    My only ask would be a full X16 (or at least a physical X16) PCIe slot or breakout ribbon. X4 would be a bit of a bottleneck for some GPUs.

    • @alleycat@lemmy.world
      link
      fedilink
      English
      0
      6 months ago

      What’s a SKU? Google just says “Stock Keeping Unit”, but I don’t think that’s correct in this context.

    • adr1an
      link
      fedilink
      English
      0
      6 months ago

      How did you get from 128 GB of RAM, as the reported spec, to 96 GB of VRAM?

      • @brucethemoose@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        “VRAM” has to be allocated to the integrated GPU in the BIOS, and reports (and previous platforms) suggest the max one can allocate is 96GB, or 3/4 of it.

    • @slacktoid@lemmy.ml
      link
      fedilink
      English
      0
      6 months ago

      I understand the memory constraints but it does feel weird for framework, is all I have to say. But that’s also the general trajectory of computing from what it seems. I really want lpcamm to catch on!

      • @brucethemoose@lemmy.world
        link
        fedilink
        English
        0
        edit-2
        6 months ago

        Eventually most system RAM will have to be packaged anyway. Physics dictates that one pays a penalty going over pins and mobo traces, and it gets more severe with every advancement.

        It’s possible that external RAM will eventually evolve into a “2nd tier” of system memory, for background processes, spillover, inactive programs/data, things like that.

      • @Scholars_Mate@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        Apparently Framework did try to get AMD to use LPCAMM, but it just didn’t work from a signal integrity standpoint at the kind of speeds they need to run the memory at.

          • Avid Amoeba
            link
            fedilink
            English
            0
            edit-2
            6 months ago

            My AM5 system doesn’t post with 128GB of 5600 DDR5 at higher than 4400 at JEDEC timings and voltage. 2 DIMMs are fine. 4 DIMMs… rip. So I’d say the present of DIMMs is already a bit shaky. DIMMs are great for lots of cheap RAM. I paid a lot less than what I’d have to pay for the equivalent size of RAM in a Framework desktop.

    • ArchRecord
      link
      fedilink
      English
      0
      6 months ago

      For the performance, it’s actually quite reasonable. 4070-like GPU performance, 128gb of memory, and basically the newest Ryzen CPU performance, plus a case, power supply, and fan, will run you about the same price as buying a 4070, case, fan, power supply, and CPU of similar performance. Except you’ll actually get a faster CPU with the Framework one, and you’ll also get more memory that’s accessible by the GPU (up to the full 128gb minus whatever the CPU is currently using).

        • ArchRecord
          link
          fedilink
          English
          0
          6 months ago

          “It’s too expensive”

          “It’s actually fairly priced for the performance it provides”

          “You people must be paid to shill garbage”

          ???

          Ah yes, shilling garbage, also known as: explaining that the price to performance ratio is just better, actually.

    • @naonintendois@programming.dev
      link
      fedilink
      English
      0
      6 months ago

      You can order those directly from chip suppliers (Mouser, Digikey, Arrow, etc.) for a lower cost than you could get them from Framework. Also, those are going to be very difficult to solder/desolder. You’re going to need a hot air station, and you need to pre-warm the board to manage the heat-sinking from the ground planes.

  • @wise_pancake@lemmy.ca
    link
    fedilink
    English
    0
    edit-2
    6 months ago

    Question about how shared VRAM works

    So I need to specify in the BIOS the split, and then it’s dedicated at runtime, or can I allocate VRAM dynamically as needed by workload?

    On macOS you don’t really have to think about this, so I’m wondering how this compares.

  • @Jollyllama@lemmy.world
    link
    fedilink
    English
    0
    6 months ago

    Calling it a gaming PC feels misleading. It’s definitely geared more towards enterprise/AI workloads. If you want upgradeable just buy a regular framework. This desktop is interesting but niche and doesn’t seem like it’s for gamers.

  • ZeroOne
    link
    fedilink
    English
    0
    edit-2
    6 months ago

    Really, Framework? Soldered RAM? How disappointing.

    • @twisterpop3@lemmy.world
      link
      fedilink
      English
      0
      6 months ago

      The CEO of Framework said that this was because the CPU doesn’t support unsoldered RAM. He added that they asked AMD if there was any way they could help them support removable memory. Supposedly an AMD engineer was tasked with looking into it, but AMD came back and said that it wasn’t possible.

      • Adam
        link
        fedilink
        English
        0
        6 months ago

        Specifically, AMD said that it’s achievable, but you’ll be operating at approximately 50% of available bandwidth, and that’s with LPCAMM2. SO-DIMMs are right out of the running.

        Mostly this is AMD’s fault, but if you want a GPU with 96–110 GB of memory you don’t really have a choice.

  • @grue@lemmy.world
    link
    fedilink
    English
    0
    6 months ago

    “To enable the massive 256GB/s memory bandwidth that Ryzen AI Max delivers, the LPDDR5x is soldered,” writes Framework CEO Nirav Patel in a post about today’s announcements. “We spent months working with AMD to explore ways around this but ultimately determined that it wasn’t technically feasible to land modular memory at high throughput with the 256-bit memory bus. Because the memory is non-upgradeable, we’re being deliberate in making memory pricing more reasonable than you might find with other brands.”

    😒🍎

    • Toes♀
      link
      fedilink
      English
      0
      6 months ago

      Would 256GB/s be too slow for large llms?

        • @Acters@lemmy.world
          link
          fedilink
          English
          0
          6 months ago

          Many LLM operations rely on fast memory, and GPUs have that, even though their memory is soldered and the vBIOS is practically a black box that is tightly controlled. Nothing on a GPU is modular or repairable without soldering skills (and tools).

      • @enumerator4829@sh.itjust.works
        link
        fedilink
        English
        0
        6 months ago

        Because you’d get like half the memory bandwidth to a product where performance is most likely bandwidth limited. Signal integrity is a bitch.
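A rough upper bound, assuming single-stream inference reads every weight once per token (which is why bandwidth, not compute, usually sets the ceiling):

```python
def rough_tokens_per_sec(params_billions: float,
                         bytes_per_param: float,
                         mem_bw_gbs: float) -> float:
    """Back-of-envelope: token rate <= memory bandwidth / bytes of weights,
    since each generated token streams the whole model through the chip.
    Ignores KV-cache traffic and compute, so real numbers come in lower."""
    weights_gb = params_billions * bytes_per_param
    return mem_bw_gbs / weights_gb

# 70B model on a 256 GB/s bus:
print(rough_tokens_per_sec(70, 1.0, 256))  # 8-bit: ~3.7 tok/s
print(rough_tokens_per_sec(70, 0.5, 256))  # 4-bit: ~7.3 tok/s
```

So not too slow to run them, just slow-ish: workable for chat on big quantized models if you’re patient, and quite usable with smaller ones.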

      • @grue@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        From what I understand, they did try, but AMD couldn’t get it to work because of signal integrity issues.

    • @simple@lemm.ee
      link
      fedilink
      English
      0
      6 months ago

      To be fair it starts with 32GB of RAM, which should be enough for most people. I know it’s a bit ironic that Framework have a non-upgradeable part, but I can’t see myself buying a 128GB machine and hoping to raise it any time in the future.

      • @Cenzorrll@lemmy.world
        link
        fedilink
        English
        0
        6 months ago

        My biggest gripe about non replaceable components is the chance that they’ll fail. I’ve had pretty much every component die on me at some point. If it’s replaceable it’s fine because you just get a new component, but if it isn’t you now have an expensive brick.

        I will admit that I haven’t had anything fail recently like I did in the past; I have a feeling the capacitor plague of the early 2000s influenced my opinion on replaceable parts.

        I also don’t fall in the category of people that need soldered components in order to meet their demands, I’m happy with raspberry pis and used business PCs.

        • @gravitas_deficiency@sh.itjust.works
          link
          fedilink
          English
          06 months ago

          You can get an MS-A1 barebones from Minisforum right now for like $215 (BYO CPU, DDR5, and M.2). And it’s got OCuLink on the back (the PCIe dock is $100, but not mandatory if you’re not going to use it). I think it’s supposed to be on sale for another couple of days.

      • @Vinstaal0@feddit.nl
        link
        fedilink
        English
        06 months ago

        According to the CEO in the LTT video about this thing, it was a design choice made by AMD, because otherwise they couldn’t get the RAM speed they advertise.

      • @4am@lemm.ee
        link
        fedilink
        English
        06 months ago

        They still could; this seems aimed at the AI/ML research space TBH

        • @unexposedhazard@discuss.tchncs.de
          link
          fedilink
          English
          06 months ago

          Yeah exactly, it’s worthless… Even the big players already admit the AI hype is over. This is the worst possible thing for them to launch; it’s like they have no idea who their customers are.

          • @Rexios@lemm.ee
            link
            fedilink
            English
            06 months ago

            The AI hype being over doesn’t mean no one is working on AI anymore. LLMs and other trained models are here to stay whether you like it or not.

      • @brucethemoose@lemmy.world
        link
        fedilink
        English
        06 months ago

        Yeah.

        But that’s AMD’s fault, as they gimped the GPU so much on the lower end. There should be a “cheap” 8-core, 1-CCD part with close to the full 40 CUs… But there is not.

  • @SuperSleuth@lemm.ee
    link
    fedilink
    English
    06 months ago

    What’s crazy is I still can’t make it onto their website without waiting in a 20 minute queue. Stupid.

  • Diplomjodler
    link
    fedilink
    English
    06 months ago

    I really hope this won’t be too expensive. If it’s reasonably affordable, I might just get one for my living room.

    • @Dudewitbow@lemmy.zip
      link
      fedilink
      English
      06 months ago

      they already announced pricing for them.

      $1,099 for the base AI Max model with 32GB(?); $1,999 fully maxed out with the top SKU.

      • @jivandabeast@lemmy.browntown.dev
        link
        fedilink
        English
        06 months ago

        $1k for the base isn’t horrible IMO, especially compared to something like the Mac mini, which starts at $600 and balloons past $1k once you bump it to 32GB of “unified memory” and 1TB of storage.

        I get why people are mad about the non-upgradeable memory, but tbh I think this is the direction the industry is going to go as a whole. They can’t get the memory to be stable and performant while also being removable. It’s a downside of this specific processor, and if people want upgradeable RAM, they should just build a PC.

        • @Dudewitbow@lemmy.zip
          link
          fedilink
          English
          0
          edit-2
          6 months ago

          I actually think it’s not the worst-priced Framework product, ironically. Prebuilt $1k PCs tend to be something like a high-end CPU + 4060 desktop anyway, so spec-wise it’s relatively reasonable. Take for example CyberPowerPC’s builds, which is one of the few OEMs that, IIRC, Gamers Nexus thinks doesn’t charge much of an SI tax on assembly. It’s actually not incredibly far off performance-wise. I’d argue it’s the best-value Framework product per dollar.