By: Peter Lewis (peter.delete@this.notyahoo.com), June 6, 2022 3:41 am
Room: Moderated Discussions
> is Neural Radiance Fields which are, to hear certain people say it, the next thing in 3D technology,
> past both polygons and ray tracing. Is there anything to this?
Neural Radiance Fields are amazing for generating new views from multiple photos of one object, but I wouldn’t call them “past both polygons and ray tracing”; they serve a different use case rather than replacing either. There are plenty of other uses for neural networks in computer graphics. I saw a demo of shadows generated by a neural network at GTC; I think it used a sparse set of ray traces, but I don’t know the details.
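For anyone curious how NeRF actually produces a new view: it’s classic volume rendering through a learned field. A minimal sketch of rendering one camera ray, assuming a hypothetical trained MLP that maps 3D points and a view direction to colors and densities (this is the general idea, not any particular implementation):

    import numpy as np

    def render_ray(mlp, origin, direction, near=0.1, far=4.0, n_samples=64):
        # Sample depths along the ray and evaluate the learned field there.
        ts = np.linspace(near, far, n_samples)
        points = origin + ts[:, None] * direction            # (n_samples, 3) positions
        rgb, sigma = mlp(points, direction)                  # per-sample color, density
        # Standard emission-absorption compositing.
        deltas = np.append(np.diff(ts), 1e10)                # spacing between samples
        alpha = 1.0 - np.exp(-sigma * deltas)                # per-sample opacity
        trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1])) # transmittance so far
        weights = alpha * trans
        return (weights[:, None] * rgb).sum(axis=0)          # final pixel color

Do that for every pixel of a virtual camera and you get the novel view.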
> Is the magic bullet AR?
High-performance AR is compelling for some types of specialized work, such as surgery with MRI/CT/ultrasound image overlay or repairing complex machinery, but I don’t think consumers need it. The AR apps that translate foreign-language street signs are cool, but that doesn’t require high performance because the images are static. And people don’t want to be around “Glassholes” with a camera on their glasses.
urbandictionary.com/define.php?term=Glasshole
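The street-sign case is basically a one-shot pipeline on a still photo (OCR, then translate, then overlay), which is why it doesn’t need much horsepower. A rough Python sketch, using pytesseract and a Helsinki-NLP translation model purely as stand-ins; I’m not claiming any real app works this way:

    from PIL import Image
    import pytesseract
    from transformers import pipeline

    # One still image, one OCR pass, one translation call --
    # no per-frame tracking or rendering required.
    translate = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

    sign = Image.open("street_sign.jpg")                     # hypothetical photo
    german_text = pytesseract.image_to_string(sign, lang="deu")
    print(translate(german_text)[0]["translation_text"])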
> Is it language?
Any application that improves people’s writing is always welcome. I don’t know how this Hemingway app works, but it would probably work better if it used transformers.
hemingwayapp.com/desktop.html
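No idea what Hemingway does under the hood, but a roll-your-own transformer-based writing aid could be as small as this sketch with the Hugging Face transformers library (the checkpoint name is just one example of a grammar-correction model, not anything Hemingway uses):

    from transformers import pipeline

    # Example seq2seq grammar-correction checkpoint; this one reportedly
    # expects a "grammar:" task prefix. Any rewriting model would work similarly.
    fix = pipeline("text2text-generation",
                   model="vennify/t5-base-grammar-correction")

    draft = "Their are to many adverbs in this sentance."
    print(fix("grammar: " + draft, max_length=64)[0]["generated_text"])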
> what's the maximum speed at which MS will implement anything taking advantage of an NPU?
Microsoft has to support an extremely wide range of hardware, so it can’t require an NPU in the foreseeable future. What it can do is use one opportunistically: Dragon NaturallySpeaking (now Microsoft’s, via the Nuance acquisition) could take advantage of an NPU whenever one is present.

Zoom already has a “touch up my appearance” filter that needs no special hardware. Photoshop has features to slim a person’s face, perform a digital nose job, and remove blemishes without blurring the whole image (the Liquify filter, Neural Filters, the Spot Healing Brush, the Patch tool). Now imagine those Photoshop features applied to real-time video; that would take serious computing power. Given the size of the global beauty industry ($511B/year), it’s safe to say many women would pay more for a processor that made them look better in video meetings.

The adoption of neural nets in popular apps will be gradual. That’s why I think offering different ratios of CPU chiplets to neural-accelerator chiplets on a module is a good approach: it lets the market find the right ratio of neural-accelerator hardware and change that ratio easily over time. If someone wants a digital nose job, the driver stack for their webcam might need a processor with an extra neural-accelerator chiplet. And if buying a neural-accelerator chiplet made women look better in video meetings, it would be impossible to make enough of them.
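That “use an NPU when one is present” pattern is already how inference runtimes behave. A sketch with ONNX Runtime that prefers an accelerated execution provider when the machine has one and quietly falls back to the CPU (DirectML is just the example accelerator here, and the model file name is hypothetical):

    import onnxruntime as ort

    # Prefer a hardware-accelerated provider when the machine has one,
    # otherwise run the exact same model on the CPU. No NPU requirement.
    available = ort.get_available_providers()
    preferred = [p for p in ("DmlExecutionProvider", "CPUExecutionProvider")
                 if p in available]

    session = ort.InferenceSession("face_touchup.onnx",      # hypothetical model
                                   providers=preferred)
    print("Running on:", session.get_providers()[0])

The app ships one model; how fast it runs depends on what the buyer paid for, which is exactly the chiplet-ratio argument.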