
AMD Ryzen 7 7840HS Linux Performance With The TUXEDO Pulse 14 Gen 3


  • AMD Ryzen 7 7840HS Linux Performance With The TUXEDO Pulse 14 Gen 3

    Phoronix: AMD Ryzen 7 7840HS Linux Performance With The TUXEDO Pulse 14 Gen 3

    When it comes to AMD Zen 4 laptop testing to date, I've done a lot of testing with the Ryzen 7 7840U as well as the Ryzen 7 PRO 7840U, which have proven to be very capable 8-core / 16-thread laptop processors with performant integrated graphics that run great on Linux -- aside from the current lack of Ryzen AI support. Recently TUXEDO Computers sent over their newly announced Pulse 14 Gen 3 Linux laptop featuring the Ryzen 7 7840HS, which is the focus of today's testing.


  • #2
    Does not seem like a good part for a laptop due to power efficiency, but for a desktop it could be an acceptable solution for higher performance.



    • #3
      Thanks for the benchmarks

      What are your thoughts on including some performance metrics of popular local LLMs?

      I'm thinking about what kind of tokens/second can be expected on new hardware going forward.

      I am sure several of the benchmarks you already include measure some of the algorithms behind running an LLM; however, I'd think the metrics from running an actual LLM would be akin to running a specific video game benchmark versus benchmarking the algorithms underneath its game engine.

      Wondering if you've tried something easy like Mozilla's Llamafile, which is an all-in-one single-file executable that includes the model and user interface.

      (Mozilla-Ocho/llamafile on GitHub: "Distribute and run LLMs with a single file.")


      For example, the LLaVA model they've provided generates 4.5-5 tokens per second on my OLED Steam Deck, and I am wondering how these Ryzen 7840 chips compare.

      I have been looking for a new laptop for a few months and your review of the ThinkPad P14s Gen 4 AMD has put it at the top of my list. Its 64 GB of RAM adds a bit of future-proofing in case running LLMs locally becomes standard practice. (Have you posted any long-term opinions of that laptop?)

      Thanks again for the great resource you provide here!
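      As a rough illustration of the measurement I mean, here's a minimal Python sketch that could shell out to a llamafile-style binary and scrape the tokens-per-second figure from llama.cpp-style timing output. The binary name, flags, and exact timing-line format here are my assumptions, not something verified on this hardware:

      ```python
      import re
      import subprocess

      # Hypothetical binary name -- substitute whichever llamafile you downloaded.
      LLAMAFILE = "./llava-v1.5-7b-q4.llamafile"

      def tokens_per_second(log: str) -> float:
          """Scrape the eval speed from a llama.cpp-style timing summary line."""
          match = re.search(r"([\d.]+) tokens per second", log)
          if match is None:
              raise ValueError("no timing line found in output")
          return float(match.group(1))

      def run_benchmark(prompt: str, n_tokens: int = 128) -> float:
          """Run the llamafile once and return measured tokens/sec (untested sketch)."""
          result = subprocess.run(
              [LLAMAFILE, "-p", prompt, "-n", str(n_tokens)],
              capture_output=True, text=True, check=True,
          )
          # llama.cpp prints its timing summary on stderr.
          return tokens_per_second(result.stderr)

      # Parsing demo against a captured-style line (no binary needed):
      sample = ("llama_print_timings: eval time = 12000.00 ms / 54 runs "
                "(222.22 ms per token, 4.50 tokens per second)")
      print(tokens_per_second(sample))  # -> 4.5
      ```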



      • #4
        Originally posted by plipt View Post
        What are your thoughts on including some performance metrics of popular local LLMs? ... Wondering if you've tried something easy like Mozilla's Llamafile ... For example, the LLaVA model they've provided generates 4.5-5 tokens per second on my OLED Steam Deck and I am wondering how these Ryzen 7840 chips compare.
        I've often been testing with Whisper.cpp and llama.cpp lately... It's actually my first time hearing of llamafile, but I will check it out.

        Any other good LLMs that are easy to set up for automation and benchmarking that you'd recommend?
        Michael Larabel
        https://www.michaellarabel.com/
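        For automating this sort of thing, one approach that keeps results comparable between runs is to repeat the measurement and report mean ± standard deviation, much as a benchmark harness would. A minimal sketch, where the `measure` callable is a stand-in for whatever actually invokes llama.cpp or a llamafile:

        ```python
        import statistics

        def summarize(samples: list[float]) -> tuple[float, float]:
            """Return (mean, sample standard deviation) of tokens/sec measurements."""
            mean = statistics.fmean(samples)
            stdev = statistics.stdev(samples) if len(samples) > 1 else 0.0
            return mean, stdev

        def benchmark(measure, runs: int = 3) -> tuple[float, float]:
            """Call measure() `runs` times and summarize the results.

            measure() stands in for a real llama.cpp / llamafile invocation
            that returns a tokens/sec figure."""
            return summarize([measure() for _ in range(runs)])

        # Demo with canned numbers in place of real measurements:
        mean, stdev = benchmark(iter([4.6, 4.4, 4.5]).__next__)
        print(f"{mean:.2f} +/- {stdev:.2f} tokens/sec")  # -> 4.50 +/- 0.10
        ```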



        • #5
          Any other good LLMs that are easy to setup for automating and benchmarking you'd recommend?
          Honestly my experience is very limited, and from everything I have seen, the landscape of running LLMs locally has been changing so rapidly that I'd imagine standardizing these tests between benchmarks run months apart would be one of your largest challenges.

          Having said that, Mozilla's Llamafile, being an everything-included single file, might make it easier to reuse the exact same LLM setup months later?

          Who can say when this field will settle down, but how these models perform seems an increasingly important metric to me.

          For example, there was lots of hype from CES about upcoming CPUs that include an NPU, but nobody seems to know what they are good for yet. They don't even seem usable on Linux.

          Thanks again



          • #6
            The real game benchmarks are missing to see where the Intel iGPU stands versus the APU's iGPU.



            • #7
              I'm wondering if the latest ROCm works with the RDNA3 based 780M iGPU?



              • #8
                Originally posted by mwelss View Post
                I'm wondering if the latest ROCm works with the RDNA3 based 780M iGPU?
                According to my reading of this GitHub issue on the official ROCm repo, it doesn't appear to be officially supported, but users have it working:

                "Does the latest ROCm 5.7 support the Radeon 780M (gfx1103)? This chip is part of the mobile CPU Ryzen 7940HS. If it's not supported, are there any plans to add support for this GPU?"
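                For what it's worth, the workaround that comes up repeatedly in that discussion for unsupported RDNA3 APUs is overriding the reported GFX version so the ROCm runtime treats gfx1103 as the officially supported gfx1100 target. This is an unofficial, unsupported hack, so treat it as an experiment rather than a recipe:

                ```python
                import os

                # Unofficial workaround: present the gfx1103 780M as gfx1100 (RX 7900 class).
                # Must be set before any ROCm-backed library (e.g. a ROCm build of PyTorch)
                # is imported, since the HSA runtime reads it at load time.
                os.environ["HSA_OVERRIDE_GFX_VERSION"] = "11.0.0"

                # import torch  # with a ROCm PyTorch build, GPU detection may now succeed

                print(os.environ["HSA_OVERRIDE_GFX_VERSION"])
                ```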



                • #9
                  Originally posted by varikonniemi View Post
                  Does not seem like a good part for a laptop due to power efficiency, but for a desktop it could be an acceptable solution for higher performance.
                  I disagree. The Intel chip generally had less performance for more power draw, along with higher power spikes.

                  While I'd agree the 7840U in Michael's test was clocking at a slightly more efficient point on its voltage-frequency curve, the HS part could provide better absolute performance in a chassis with sufficient power delivery and cooling. I'd also be curious how the BIOS is configured between the two machines and whether they limit power on battery (which would show a bigger difference in performance or battery life).



                  • #10
                    Interesting...

                    Zooming in on the power graphs, the Ryzens look very consistent: each test ran three times, and the three power spikes were almost exactly the same.

                    Power on the 155H was inconsistent on some tests, and it often had a very aggressive power spike on the first run.

                    I'm guessing the 155H often hit a turbo/thermal/power limit and throttled? To the point that on some tests the graphs for the 155H are much longer, i.e. it had to re-run the tests multiple times to calculate an accurate average with all the turbo/power-limit/throttle dance going on under the hood.
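                    That would match how the Phoronix Test Suite behaves, as I understand it: it dynamically increases the run count when results are too noisy. A toy sketch of that logic (the 3.5% threshold is my guess, not necessarily PTS's actual default):

                    ```python
                    import statistics

                    def needs_more_runs(results: list[float], rel_stdev_limit: float = 0.035) -> bool:
                        """Re-run heuristic: keep benchmarking while the sample standard
                        deviation exceeds a fraction of the mean (threshold is an assumption)."""
                        if len(results) < 2:
                            return True
                        return statistics.stdev(results) / statistics.fmean(results) > rel_stdev_limit

                    # Consistent Ryzen-style runs: no extra runs needed.
                    print(needs_more_runs([100.0, 101.0, 100.0]))  # -> False
                    # A spiky first run like the 155H's forces additional runs:
                    print(needs_more_runs([140.0, 100.0, 101.0]))  # -> True
                    ```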

