This week at Computex 2025, we got to see the Intel Arc Pro B50 and B60 GPUs. These are lower-cost, lower-power GPUs with what Intel hopes will be a key differentiator: memory capacity. In the next room over, we saw an Intel 18A Panther Lake system up and running.
Intel Arc Pro B50 and B60 GPUs Shown at Computex 2025
One of the big announcements was the Intel Arc Pro B50 and B60 GPUs. This is Intel’s entry into the workstation graphics market with the “Pro” versions of its 2nd Gen Battlemage GPUs. The B50 is expected to be priced around $299, and our sense is that the B60 will land in the $399 or so range, but final pricing will be up to board partners.

One of the big selling points of the Intel Arc Pro B50 is that it packs 16GB into a 70W card at a price point of a few hundred dollars. Intel’s idea, which is a good one, is that this is a card for small systems that want to add GPUs with decent amounts of memory so that they can run LLMs. The opening exists because NVIDIA has to protect its higher-end GPU margins.

The Intel Arc Pro B50 has a neat form factor: at 70W, it is a bus-powered design that does not need an auxiliary power connector.

It is also a low-profile design like the RTX 4000 Ada SFF we saw in our Dell Precision 3280 Compact Review.

Intel says the number one application for the performance uplift over the A50 generation is inference, and a lot of that has to do with the 16GB memory footprint.
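The 16GB point is easy to quantify with back-of-the-envelope math: weight memory is roughly parameter count times bytes per parameter, plus overhead for the KV cache and activations. Here is a rough sketch; the 20% overhead factor is a loose assumption, and real usage varies with context length and runtime:

```python
def est_vram_gb(params_billion, bytes_per_param, overhead=1.2):
    """Rough VRAM estimate in GB: weights plus ~20% for KV cache
    and activations (the overhead factor is a loose assumption)."""
    return params_billion * bytes_per_param * overhead

# An 8B-parameter model at FP16 (2 bytes/param) vs. 4-bit (~0.5 bytes/param)
print(est_vram_gb(8, 2))    # roughly 19 GB: tight or over on a 16GB card
print(est_vram_gb(8, 0.5))  # roughly 5 GB: fits with plenty of room for context
```

This is why a 16GB card at this price tier is interesting: quantized mid-size models fit where 8GB cards run out of room.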

The next step up is the 24GB Intel Arc Pro B60, which operates in the 120-200W range.

Something to note is that Intel is focusing on SR-IOV here, which is a big win for those who want to use virtualized GPUs.
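For context on what SR-IOV support means in practice: on Linux, a GPU that supports SR-IOV exposes standard sysfs files advertising how many virtual functions (VFs) it can present to VMs. A quick capability check might look like the sketch below; the PCI address is a hypothetical placeholder, and the sysfs paths are the stock kernel PCI interface, not anything Intel-specific:

```python
from pathlib import Path

def sriov_vf_count(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Return (total_vfs, enabled_vfs) for a PCIe device, or None if the
    device does not expose the SR-IOV capability in sysfs."""
    dev = Path(sysfs_root) / pci_addr
    total = dev / "sriov_totalvfs"
    if not total.is_file():
        return None
    return (int(total.read_text()), int((dev / "sriov_numvfs").read_text()))

# Hypothetical PCI address; find your GPU's address with `lspci`.
print(sriov_vf_count("0000:03:00.0"))
```

Enabling VFs is then a matter of writing a count to `sriov_numvfs`, after which each VF can be passed through to a VM like any PCIe device.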
Here is the chip we found:

It is probably a good thing that this is a professional workstation part, since gaming folks might read the underside as an “L”.

Intel showed off a number of engineering sample designs. The B50 parts were standard Intel designs, but the B60 had a decent amount of variation, including even a dual-GPU card that would have 48GB onboard.

One of the more interesting ones was a set of four Intel Arc Pro B60 Turbo GPUs in single-slot 3U passive designs that can be used in servers or workstations.

Intel’s big idea here is Project Battlematrix, with up to eight GPUs on four dual-GPU cards.

The plan is for it to leverage oneAPI and Intel’s software stack, including on Linux, to be a powerful AI machine.

My feedback to Intel was very blunt. This is a great idea, but it is not a workstation. This is the new workgroup server for AI, much like those Sun Ultra 10s that sat in Silicon Valley cubes running local services for teams (and sometimes much more than that). I would find it exceedingly difficult to want a system like this for myself. I would, however, find it extremely useful as an LLM machine for a team of 8-12 people. It was the workstation team giving the presentation, but what they were envisioning was actually the role of a server.

Our sense is that while the parts are sampling now, these are going to be Q3 GPUs. That should align with some of the AI inference work being readied and just before some of the virtualization features land.
Intel Panther Lake 18A Booted and Running Applications
On the Panther Lake side, we got to see a neat demo. For those who do not know, Panther Lake on the consumer side and Clearwater Forest on the Xeon side are the big Intel 18A process chips we are eagerly awaiting. Intel 18A is where Intel can regain process leadership from TSMC, making it a big deal in the industry.

There was news in August 2024 that Clearwater Forest and Panther Lake Booted on Intel 18A. At Computex 2025, roughly nine months later, we saw them running live with Windows and demos like this DaVinci Resolve masking demo.

It was at least cool to see these running.
Final Words
The Intel Arc Pro B50 and B60 GPUs are going to be really interesting when they hit the market. LLMs are often limited by memory capacity more than by the raw performance of a GPU. Between the commitment to relatively high memory capacity per GPU, lower power, lower cost, and the focus on the Linux/virtualization stack, these might be the go-to small-server GPUs for many. NVIDIA is leaving the low end wide open for others because it is much more focused on the enormous opportunity in high-end GPUs.
I have to say, as someone who was not excited by the gaming demos, I came away very excited about the AI and virtualization side. In 2021, around Raja’s Chip Notes Lay Out Intel’s Path to Zettascale, I told Raja, then the lead of Intel’s GPU efforts, that the low-end AI inference and virtualization market was there for the taking. It looks like Intel is finally getting around to it, years after his departure.
Of course, Panther Lake remains firmly in the “waiting to see when it arrives” bucket.
We also have a bit from the demo rooms here at Computex 2025 in our latest Substack.
You did a review of an ASRock Rack GNRD8-2L2T back in November. That looks like it could make a good workstation if using one of the 136 PCIe Gen5 lane Xeon 6 chips. Any sighting of those motherboards at Computex?
Only the B60 gets SR-IOV? Or is there an out-of-tree driver that can be used, similar to the integrated GPU?
I’m curious if I could use these in my home server to give my security system a bit more grunt. They should also have a bit more transcoding grunt when doing two things at the same time, which is where my N100 mini PCs are falling over.
With these being as low cost as they are, I’m definitely thinking about getting the B50 for my local home server. These look like a really good deal on dollars and TBP compared with NVIDIA GPUs, which seem to just keep going up in power and size.
I hope B50 has SR-IOV too. If it does, I might get one just because.
Hopefully the B50/B60 will be much easier to obtain than the A40.
There’s a typo in the headline – “Pather” should be “Panther”
It could have been a really big deal if all these new cards had been on Xe3. The Xe3 GPUs apparently work, since they are showing demos. Xe3 reportedly supports FP8, so this would have been a way for Intel to better support FP8-trained DeepSeek models. LBT is missing a chance to do something bold.