
Intel NPU Driver Preparing Hardware Scheduler & Profiling Support


  • #1

    Phoronix: Intel NPU Driver Preparing Hardware Scheduler & Profiling Support

    The Intel iVPU accelerator driver changes for the upcoming Linux 6.10 merge window have been submitted for advancing the Neural Processing Unit (NPU) support found since the launch of Meteor Lake with Intel Core Ultra notebook CPUs. For this iVPU/NPU driver in Linux 6.10 are a few notable new features...


  • #2
    Call me when it can automatically install Gentoo and make me look super dapper and handsome on Discord and Teams calls. 💁‍♂️💁‍♂️💁‍♂️

    Really though, is this of any use yet for Stable Diffusion and for local LLMs?



    • #3
      I am wondering: can this be used with Mesa's Teflon?



      • #4
        Originally posted by Eirikr1848
        Call me when it can automatically install Gentoo and make me look super dapper and handsome on Discord and Teams calls. 💁‍♂️💁‍♂️💁‍♂️

        Really though, is this of any use yet for Stable Diffusion and for local LLMs?
        You can run Stable Diffusion and a few other things on it with Intel's GIMP plugins. It's a little disappointing: the integrated GPU is faster. You can use both together, though even combined they don't come anywhere close to my desktop RTX 3060 12GB.

        Of course, the main benefit is power savings for features you'd use on the go, like AI webcam effects and things of that nature.



        • #5
          Originally posted by Eirikr1848
          Really though, is this of any use yet for Stable Diffusion and for local LLMs?
          It seems the Intel NPU Acceleration Library works at least with smaller LLMs like TinyLlama.
          The NPU should already be able to accelerate Stable Diffusion: it appears in the execution diagram on the torch.compile page. But that same page doesn't explicitly mention it in the "Support for Automatic1111 Stable Diffusion WebUI" section, so it's hard to say.



          • #6
            This should eliminate the need for a GPU in DIY NVR systems. A Coral TPU has only 4 TOPS and very limited memory, yet people get by fine with up to 10 cameras. With 20 TOPS and power draw this low, face recognition with more accurate models should scale really well.
