NVIDIA RTX Unreal Engine 5.2 Branch Released

NVIDIA has announced the release of the NVIDIA RTX Branch of Unreal Engine 5.2, a version of the popular game engine that integrates NVIDIA’s RTX and neural rendering technologies. The branch brings the engine’s latest advancements, such as Nanite and Lumen, together with NVIDIA’s ray tracing, DLSS, and NGX features.


Featured RTX Technologies

The NVIDIA RTX Branch of Unreal Engine 5.2 includes several RTX technologies that enhance the graphics and performance of games and applications. These are:

  • RTX Global Illumination (RTXGI): RTXGI provides scalable solutions to compute infinite multi-bounce lighting and soft-shadow occlusions without bake times, light leaks, or expensive per-frame costs. (Available in NvRTX 5.0 and 4.27 only.) Learn more about RTXGI.
  • RTX Direct Illumination (RTXDI): RTXDI lets artists add unlimited shadow-casting, dynamic lights to game environments in real time without worrying about performance or resource constraints. Learn more about RTXDI.
  • Deep Learning Super Sampling (DLSS): DLSS leverages the power of a deep learning neural network to boost frame rates and generate beautiful, detailed images for your games; a code sketch showing how it can be enabled appears after this list. Learn more about DLSS.
  • NVIDIA Real-Time Denoisers (NRD): NRD is a spatio-temporal, API-agnostic denoising library that’s designed to work with low ray-per-pixel signals. Learn more about NRD.
  • Deep Learning Anti-Aliasing (DLAA): An AI-based anti-aliasing mode that uses the same technology powering NVIDIA DLSS, giving you even better graphics in your games. Learn more about DLAA.
  • NVIDIA Image Scaling (NIS): A platform agnostic software driver-based spatial upscaler for all games. Learn more about NIS.
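
As referenced in the DLSS item above, here is a minimal sketch of how DLSS can be toggled from game code in Unreal Engine once the branch’s DLSS plugin is enabled. The console-variable names are assumptions based on the publicly documented DLSS plugin for UE and should be verified against your NvRTX build:

```cpp
// A minimal sketch (UE C++ module code, not a standalone program): toggling
// DLSS via console variables once the branch's DLSS plugin is enabled.
// The cvar names below are assumptions; check your NvRTX build's docs.
#include "HAL/IConsoleManager.h"

void EnableDLSS()
{
    // Turn the DLSS upscaler on.
    if (IConsoleVariable* Enable =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Enable")))
    {
        Enable->Set(1);
    }

    // DLSS derives its input resolution from the screen percentage;
    // ~66% is in the range typically used for a "Quality" preset.
    if (IConsoleVariable* ScreenPct =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage")))
    {
        ScreenPct->Set(66.0f);
    }
}
```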

What can developers do with the NVIDIA RTX Branch of Unreal Engine 5.2?

Developers can use the NVIDIA RTX Branch of Unreal Engine 5.2 to create stunning and realistic games and applications that leverage the power of NVIDIA’s GPUs and AI. The branch also includes support for NVIDIA Omniverse, a platform that enables collaborative creation and simulation across multiple applications and devices.

This branch is available now for developers to download and use from the NVIDIA Developer website. The branch is built on Unreal Engine 5.2 and requires an NVIDIA RTX GPU to take advantage of its RTX features.

How can developers get started with the branch?

Getting Started with RTXDI and NvRTX in Unreal Engine 5 (Part 1)

NVIDIA has also provided a number of sample projects and tutorials to help developers get started with the branch, such as the Marbles RTX demo, the DLSS Playground, and the MetaHuman Creator sample. Developers can also access the NVIDIA Developer forums and Discord server for support and feedback.

NVIDIA plans to update the branch regularly with new features and improvements as Unreal Engine 5 evolves. The company also encourages developers to share their feedback and suggestions on how to improve the branch and its integration with NVIDIA’s technologies.

Link: NVIDIA RTX Branch of Unreal Engine (NvRTX)

Why is this branch important for game development and graphics?

This branch is a testament to NVIDIA’s commitment to advancing the state of the art in game development and graphics. By combining the power of Unreal Engine 5 with NVIDIA’s RTX and neural rendering technologies, developers can create immersive and realistic experiences that push the boundaries of what is possible.

Resources:

NVIDIA RTX Branch of Unreal Engine (NvRTX)

NVIDIA Game Developer on LinkedIn

https://twitter.com/NVIDIAGameDev/status/1673740368528166913


Salama Productions and SaudiGN agree on a new cooperation to promote indie games

Salama Productions, an independent video game developer based in Cairo, Egypt, and SaudiGN, a platform that supports game development in the Arab region, especially Saudi Arabia, have agreed on a new cooperation to boost communication and interaction between indie game developers and the audience interested in game news.

The cooperation, which was reached on June 13, 2023, aims to deliver unique and exclusive content about indie games produced or supervised by Salama Productions, such as their upcoming title Off the Grid: Bad Dream, a cinematic and story-driven mystery and thriller game. The content will be featured by SaudiGN regularly on their social media channels, as well as on Salama Productions’ own platforms.

The cooperation will also benefit both sides by increasing their visibility, exposure, and reputation in the gaming industry and community. Salama Productions will gain access to a wider and more engaged audience in Saudi Arabia and the region, while SaudiGN will enhance its credibility and diversity by showcasing high-quality indie games from Egypt.

Both collaborators have expressed their excitement and enthusiasm for the cooperation.

Salama Productions’ founder, Mohamed Fawzy, said: “We are thrilled to cooperate with SaudiGN, a platform that shares our passion for indie games. We believe this will help us connect with possible future fans and supporters in Saudi Arabia, the Arab region and beyond.”

SaudiGN’s Community & Production Manager, Nabil Alsaiad, said: “It’s an honor to partner with such ambitious people, and we look forward to working together.”

SaudiGN is a community for independent and professional game developers that seeks to grow the game development market in both the public and private sectors. It does so by building a cooperative developer community, providing services and events for developers of all levels, and improving career and self-employment opportunities. SaudiGN also aims to raise community awareness of game development and to support local developers and highlight them globally, so that the Kingdom becomes a hub for those interested in the field.

The cooperation is open-ended rather than limited by time, with frequent updates and content releases planned. Both parties hope to maintain a long-term collaboration that will keep the audience entertained and informed.

Thank you for your attention and interest!



Electric Dreams Environment UE: A Stunning Showcase of UE 5.2

Epic Games has released Electric Dreams Environment, a new sample project for Unreal Engine 5.2 that demonstrates the power and potential of the latest features and technologies available in the engine. The project recreates the lush, photorealistic natural environment first shown during Epic’s State of Unreal presentation at the 2023 Game Developers Conference, where it was demoed with the Rivian R1T.

Electric Dreams Environment Sample Project Unreal Engine

What is Electric Dreams Environment and How Was It Built?

Electric Dreams Environment is a sample project that showcases several new, experimental features and technologies that are available in Unreal Engine 5.2, including:

  • Procedural Content Generation framework (PCG): A system that allows users to create tools, define rules, and expose parameters to populate large scenes with UE assets of their choice, making the process of large world creation fast, interactive, and efficient.
  • Substrate: An experimental material authoring framework that moves beyond fixed shading models, letting users layer and combine shading behaviors to build more complex, expressive materials.
  • Unreal Engine’s latest physics developments: A set of features and improvements that enhance the realism and interactivity of physical simulations in UE, such as Chaos Destruction and Niagara fluid simulation, alongside audio features such as Soundscape.

The project also showcases several existing Unreal Engine 5 features, such as:

  • Lumen: A fully dynamic global illumination solution that reacts to scene and light changes.
  • Nanite: A virtualized micropolygon geometry system that enables users to create scenes with massive amounts of geometric detail.
  • MetaSounds: A high-performance audio system that gives users complete control over sound creation and processing.

The Electric Dreams Environment was built by incorporating both traditional and procedural workflows directly within Unreal Engine using the PCG framework. The PCG framework allows users to create custom tools that can generate content based on rules and parameters that can be adjusted interactively. For example, the project uses PCG tools to populate the world with vegetation, rocks, debris, and other props.
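
The PCG graph itself is authored visually in the editor rather than in code, but the rule-driven scattering it performs is easy to sketch. The following is an illustrative, self-contained example of the sample -> filter -> spawn pattern the framework generalizes; all names are hypothetical and this is not the PCG API:

```cpp
// Illustrative only: a deterministic, rule-driven scatter in the spirit of
// PCG's sample -> filter -> spawn flow. The real framework is a node graph.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Prop { double x, y, scale; };

std::vector<Prop> ScatterProps(double width, double height, int count,
                               unsigned seed, double roadX, double clearance)
{
    std::mt19937 rng(seed); // deterministic: the same seed reproduces the layout
    std::uniform_real_distribution<double> ux(0.0, width);
    std::uniform_real_distribution<double> uy(0.0, height);
    std::uniform_real_distribution<double> us(0.8, 1.2);

    std::vector<Prop> out;
    while (static_cast<int>(out.size()) < count)
    {
        Prop p{ux(rng), uy(rng), us(rng)};
        // Rule: keep props clear of a "road" running along x = roadX.
        if (std::abs(p.x - roadX) < clearance)
            continue; // rejected candidates are simply resampled
        out.push_back(p);
    }
    return out;
}

int main()
{
    for (const Prop& p : ScatterProps(100.0, 100.0, 5, 42, 50.0, 10.0))
        std::printf("prop at (%.1f, %.1f), scale %.2f\n", p.x, p.y, p.scale);
}
```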

The project also uses Substrate to author complex, layered materials; most notably, the opalescent paint on the Rivian R1T featured in the demo.

How to Download and Explore It

Electric Dreams Environment is a free sample project that anyone can download and use for learning purposes. The project is available from the Samples tab of the Epic Games Launcher or from the Unreal Engine Marketplace.

The project is graphically intensive and requires a powerful video card to run at a stable framerate. The recommended hardware specifications are as follows:

  • 12-core CPU at 3.4 GHz
  • 64 GB of system RAM
  • GeForce RTX 3080 (equivalent or higher)
  • At least 10 GB VRAM

The project also requires DirectX 12 support and up-to-date graphics drivers.

The project consists of several levels that demonstrate different aspects of the Electric Dreams Environment. Users can navigate the levels using keyboard and mouse controls or by using a drone controller. Users can also trigger different sequences that showcase the PCG tools, Substrate materials, physics features, and more.

Why Electric Dreams Environment Matters

Electric Dreams Environment is a stunning showcase of what Unreal Engine 5.2 can do and what users can achieve with it. The project demonstrates how users can create large-scale environments with high-fidelity graphics and realistic physics using both traditional and procedural workflows within UE.

The project also shows how users can leverage the new features and technologies in UE 5.2 to create immersive and interactive experiences with dynamic lighting, materials, sound, and more.

Epic Games hopes that Electric Dreams Environment will inspire users to explore the possibilities of Unreal Engine 5.2 and create their own amazing projects with it.

Electric Dreams and the Rivian R1T | UE5.2 Demo | GDC 2023

Resources:

Electric Dreams Environment | PCG Sample Project – Unreal Engine

Epic Games Unreal Engine Electric Dreams Art Blast – ArtStation Magazine

Electric Dreams Environment in Unreal Engine | Unreal Engine 5.2 Documentation


RealityScan: A New App for Creating 3D Models from Photos

Epic Games, the creators of Unreal Engine and Fortnite, have partnered with Capturing Reality, a leading developer of photogrammetry software, to bring you RealityScan, a free app that lets you create stunning 3D models from photos on your Android device.

RealityScan Is Now Also Available for Android

How RealityScan Works

RealityScan is free to download and use. It works by taking multiple photos of an object or scene from different angles and processing them into a textured 3D mesh that can be exported to Unreal Engine or other 3D applications.
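
RealityScan’s processing pipeline is proprietary, but the core principle of photogrammetry, recovering a 3D point by intersecting viewing rays from photos taken at different angles, fits in a few lines. Below is a minimal sketch of two-view triangulation (midpoint method) using made-up camera values:

```cpp
// Minimal sketch of two-view triangulation (midpoint method). Camera
// positions and ray directions are made-up values; a real pipeline first
// estimates them from matched image features across many photos.
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Find the point closest to two viewing rays p1 + t*d1 and p2 + s*d2.
Vec3 Triangulate(Vec3 p1, Vec3 d1, Vec3 p2, Vec3 d2) {
    Vec3 r = sub(p2, p1);
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double e = dot(d1, r), f = dot(d2, r);
    double denom = a * c - b * b;          // approaches 0 for parallel rays
    double t = (c * e - b * f) / denom;    // parameter along ray 1
    double s = (b * e - a * f) / denom;    // parameter along ray 2
    // Midpoint between the two closest points on the rays.
    return mul(add(add(p1, mul(d1, t)), add(p2, mul(d2, s))), 0.5);
}

int main() {
    // Two cameras one unit apart, both sighting the point (0.5, 0, 5).
    Vec3 X = Triangulate({0, 0, 0}, {0.5, 0, 5}, {1, 0, 0}, {-0.5, 0, 5});
    std::printf("recovered point: (%.2f, %.2f, %.2f)\n", X.x, X.y, X.z);
}
```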


Features and Benefits of RealityScan App

RealityScan is designed to be easy and intuitive to use, and it offers various features such as automatic alignment, color correction, texture generation, and mesh optimization. Users can also preview their 3D models in augmented reality (AR) mode or share them online with other creators.


Epic Games’ Vision for 3D Content Creation

According to Epic Games, RealityScan is part of their vision to democratize 3D content creation and make it accessible to everyone. The app is also compatible with Unreal Engine’s MetaHuman Creator, a tool that allows users to create realistic digital humans in minutes.

Availability and Compatibility of RealityScan App

RealityScan was first released for iOS and is now also available for Android devices that support ARCore; it requires at least 4 GB of RAM and 64 GB of storage.

What the Developers Say

In a press release, Marc Petit, General Manager of Unreal Engine at Epic Games, said: “We’re thrilled to partner with Capturing Reality to bring RealityScan to the Unreal Engine community. This app is a game-changer for anyone who wants to create high-quality 3D models from photos, whether they are hobbyists, professionals, or students.”

Martin Bujnak, CEO of Capturing Reality, added: “RealityScan is the result of years of research and development in photogrammetry. We’re excited to collaborate with Epic Games and leverage their expertise in 3D graphics and AR technology. We believe that RealityScan will open up new possibilities for 3D content creation and storytelling.”

How to Get Started with RealityScan

If you want to try out RealityScan for yourself, you can download it from the Google Play Store or visit the official website for more information.

Resources:

RealityScan | Free to download 3D scanning app for mobile – Unreal Engine

RealityScan is now available for Android devices! – Unreal Engine


MetaHuman Animator: A breakthrough in facial animation

MetaHuman, the groundbreaking framework that allows anyone to create realistic digital humans in minutes, has just gained a new feature set that takes facial animation to the next level. MetaHuman Animator is now available for all users of Unreal Engine, the world’s most open and advanced real-time 3D creation tool.

A showcase of MetaHuman Animator: Blue Dot

One of the most impressive examples of what MetaHuman Animator can do is Blue Dot, a short film created by Epic Games’ 3Lateral team in collaboration with local Serbian artists, including renowned actor Radivoje Bukvić, who delivers a monologue based on a poem by Mika Antić. The performance was filmed at Take One studio’s mocap stage with cinematographer Ivan Šijak acting as director of photography.

Blue Dot demonstrates the level of fidelity that artists and filmmakers can expect when using MetaHuman Animator with a stereo head-mounted camera system and traditional filmmaking techniques. The team was able to achieve this impressive level of animation quality with minimal interventions on top of MetaHuman Animator results. You can watch the short film here:

Blue Dot: A 3Lateral Showcase of MetaHuman Animator

How does it work?

MetaHuman Animator enables you to capture an actor’s performance using an iPhone or a stereo head-mounted camera (HMC) system and apply it as high-fidelity facial animation on any MetaHuman character, without the need for manual intervention. With it, you can capture the individuality, realism, and fidelity of your actor’s performance, and transfer every detail and nuance onto any MetaHuman.

MetaHuman Animator is designed to be easy to use, fast, and accurate. You can record your performance using the Live Link Face app on your iPhone, or use a compatible stereo HMC system. Then, you can stream the data directly into Unreal Engine via Live Link, and see your MetaHuman character come to life in real time. You can also record the data and edit it later using Sequencer, Unreal Engine’s cinematic editing tool.
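
Conceptually, a Live Link-style facial stream is just a sequence of frames, each carrying named animation curves (blendshape weights) that are pushed onto the character’s rig every tick. The sketch below is purely illustrative; the types and curve names are hypothetical stand-ins, not the actual Live Link API:

```cpp
// Purely illustrative: hypothetical types and curve names, not the actual
// Live Link API. A real stream delivers frames like this many times per
// second, and each named curve drives a blendshape on the MetaHuman rig.
#include <cstdio>
#include <map>
#include <string>

struct FacialFrame {
    double timeSeconds;
    std::map<std::string, float> curves; // weight in [0, 1] per facial curve
};

// "Applying" a frame means pushing each weight onto the matching rig control.
void ApplyToCharacter(const FacialFrame& frame) {
    for (const auto& [curve, weight] : frame.curves)
        std::printf("t=%.3fs  %s -> %.2f\n", frame.timeSeconds, curve.c_str(), weight);
}

int main() {
    // One tick of a stream: the jaw opens slightly while the left eye blinks.
    FacialFrame f{0.033, {{"JawOpen", 0.42f}, {"EyeBlinkLeft", 0.10f}}};
    ApplyToCharacter(f);
}
```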

MetaHuman Animator Now Available

What can you do with it?

MetaHuman Animator is not only a powerful tool for creating realistic facial animation, but also a flexible and creative one. You can mix and match different MetaHumans and performances, and even blend them with other animation sources such as motion capture or keyframes. You can also adjust the intensity and timing of the facial expressions using curves and sliders. The possibilities are endless.

MetaHuman Animator is a game-changer for anyone who wants to create high-quality digital humans for games, films, TV shows, or any other project that requires realistic facial animation. It is also a great way to experiment with different emotions, personalities, and styles of acting. Whether you are a professional animator, a hobbyist, or a student, MetaHuman Animator will help you bring your vision to life.

Aaron Sims Creative and Ivan Šijak on Using MetaHuman Animator

How to get started?

MetaHuman Animator is now available for free for all Unreal Engine users. To get started, you need to download Unreal Engine 5.2 from the Epic Games Launcher, and sign up for MetaHuman Creator at https://www.unrealengine.com/en-US/metahuman-creator.

You can also find more information and tutorials on the Unreal Engine website.

How to Use MetaHuman Animator in Unreal Engine


Initial Internal Indie Startup System

Introducing Salama Production: A Pre-Seed Indie Startup with a Vision to Crowdsource an Amazing Game Project

As we enter the third quarter of 2023, I’m excited to share some great news with you about our Internal Indie Startup System. Salama Production is a startup that is currently at the pre-seed stage, which involves validating the idea for our game project. We have a vision to create an amazing game that will delight and inspire gamers around the world.

To achieve this vision, we need your help. That’s why we have created an internal system that will boost our collaboration and efficiency. This system will allow us to:

  • Organize our work, form a team, and welcome new indie devs
  • Outline how we operate at this pre-seed stage and how we structure our indie startup
  • Maintain order when new indie devs join us in the future
  • Achieve our goals and vision for our indie startup

I’m happy to announce that this system is ready and has been implemented. This means we can now crowdsource our indie game project: indie devs who are interested in the project and share our vision will be able to join and work with us through our crowdsourcing program.

This is a unique chance to be part of something amazing and make your mark on the gaming industry.

“Salama Production is more than just a startup. It’s a community of passionate and talented indie devs who share a common vision and love for gaming. I’m thrilled to be part of this team and contribute to this game project.” – Hisham Al-Kubeyyer, Producer and Voice Actor

More details about our system and our crowdsourcing program will be revealed soon, so stay tuned for more updates and exciting things to come. Thank you for your support and enthusiasm.

If you are a passionate and talented indie dev who shares our vision and wants to join our team early before we make the crowdsourcing program announcement, please subscribe with your email to our waiting list below and we will get back to you as soon as possible.

Don’t miss this opportunity to join Salama Production and be part of the next big thing in gaming. Subscribe now and let’s make history together!

Subscribe to our waiting list



NVIDIA Kairos Demo Shows the Future of NPCs with Generative AI

NVIDIA has unveiled a stunning demo that showcases how generative AI can bring life and intelligence to virtual characters in games through NVIDIA ACE for games. The demo, called NVIDIA Kairos, features Jin, an NPC who runs a ramen shop and interacts with players using natural language and realistic facial expressions.

NVIDIA ACE for Games Sparks Life Into Virtual Characters With Generative AI

NVIDIA Kairos was created using NVIDIA Avatar Cloud Engine (ACE) for Games, a new service that allows developers to build and deploy customized speech, conversation, and animation AI models for NPCs. NVIDIA ACE for Games leverages NVIDIA’s expertise in AI and game development to provide optimized AI foundation models (a sketch of how these modules chain together follows the list below), such as:

  • NVIDIA NeMo, which enables developers to build and customize large language models that can reflect the character’s personality, backstory, and context. Developers can also use NeMo Guardrails to prevent counterproductive or unsafe conversations with NPCs.
  • NVIDIA Riva, which provides automatic speech recognition and text-to-speech features to enable live speech conversation with NPCs in any language.
  • NVIDIA Omniverse Audio2Face, which generates expressive facial animation for NPCs from just an audio source. Audio2Face supports Unreal Engine 5 and MetaHuman characters.
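
As noted above, these modules chain into a single loop per conversational turn: speech recognition, language generation, speech synthesis, and facial animation. The sketch below shows only that data flow; every function is a hypothetical stand-in, since the real Riva, NeMo, and Audio2Face services are accessed through their own SDKs:

```cpp
// Illustrative data flow only: every function below is a hypothetical
// stand-in. In practice, Riva (ASR/TTS), NeMo (LLM), and Audio2Face are
// separate cloud services/SDKs wired together by the game.
#include <cstdio>
#include <string>

std::string SpeechToText(const std::string& micAudio) {
    std::printf("transcribing %s ...\n", micAudio.c_str());      // Riva ASR step
    return "What's in today's ramen?";
}

std::string GenerateReply(const std::string& persona, const std::string& userText) {
    std::printf("[%s] heard: %s\n", persona.c_str(), userText.c_str());
    return "Today's special is spicy miso. Want a bowl?";        // NeMo LLM step
}

std::string TextToSpeech(const std::string& replyText) {
    std::printf("synthesizing: %s\n", replyText.c_str());        // Riva TTS step
    return "reply.wav";
}

void DriveFacialAnimation(const std::string& audioFile) {
    std::printf("animating face from %s\n", audioFile.c_str());  // Audio2Face step
}

int main() {
    std::string userText = SpeechToText("mic_capture.wav");
    std::string reply = GenerateReply("Jin, ramen shop owner", userText);
    DriveFacialAnimation(TextToSpeech(reply));
    std::printf("NPC says: %s\n", reply.c_str());
}
```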

NVIDIA collaborated with Convai, an NVIDIA Inception startup that specializes in conversational AI for virtual worlds, to integrate NVIDIA ACE for Games modules into their platform. Convai’s platform enables developers to create and deploy AI characters in games and virtual worlds with ease.

“Generative AI has the potential to revolutionize the interactivity players can have with game characters and dramatically increase immersion in games,” said John Spitzer, vice president of developer and performance technology at NVIDIA. “Building on our expertise in AI and decades of experience working with game developers, NVIDIA is spearheading the use of generative AI in games.”

NVIDIA Kairos was rendered in Unreal Engine 5 using the latest ray-tracing features and NVIDIA DLSS. The demo was unveiled at COMPUTEX 2023, alongside NVIDIA’s other announcements across Omniverse and RTX. For more information, visit the NVIDIA website.


References:

NVIDIA Omniverse ACE

NVIDIA ACE for Games Sparks Life Into Virtual Characters With Generative AI


Off The Grid: Bad Dream Enters Development Stage

Salama Productions Announces the Start of Development for Off The Grid: Bad Dream

Hello, this is Mohamed Fawzy, the owner and director of Salama Productions. I’m thrilled to announce that we have started developing our first game project: Off The Grid: Bad Dream.

Off The Grid: Bad Dream is an episodic adventure video game series that uses cinematic aesthetics to tell a unique psychological thriller story. The game follows a survivor of a car accident that turns his whole life upside down: his life becomes a strange dream that feels all too real, balanced on the edge between fiction and reality.

We started working on this game on May 6, 2023, and we have been developing one of the game’s system features and a small initial part of the original story and character writing. We have also settled on the overall idea of the game, but we are still experimenting with a lot of things and trying to find the best way to bring our vision to life. As indie devs, we like experimenting with new ideas that push beyond the boundaries of established game genres and conventions.

We want to create a game that will do just that. A game that will keep you on the edge of your seat. A game that will make you question what is real and what is not. A game that will take you on a journey between fiction and reality.

Psychological thrillers are one of the most popular and profitable genres in gaming, as well as in movies, podcasts, and TV shows. According to PwC, the global gaming industry is expected to be worth $321 billion by 2026. Gamers love to immerse themselves in stories that challenge their minds, emotions, and perceptions.

We plan to add a dev log section to our website soon, where we will share our progress and updates on the game’s development. We want you to be part of our journey and to join us in creating this amazing game. That’s why we invite you to follow us on our social media channels and subscribe to our newsletter to stay tuned for more news and updates.

The first episode of Off The Grid: Bad Dream is in development for Microsoft Windows and will be released when we are satisfied that it is ready. We are currently in the pre-seed stage of our development, which involves validating the idea for our game project. We are looking for ways to fund this project, such as crowdfunding, grants, or investors, and we will announce the release date when we have more clarity and confidence in our timeline. For more information about the game, visit our website here or check out our pages on IMDb, Giant Bomb, and Game Jolt.

Follow us on Facebook
Follow us on Twitter
Follow us on Instagram
Follow us on YouTube
Follow us on LinkedIn

Thank you for your support and enthusiasm. We can’t wait to share more with you soon. Let’s make history together!


Sources:

1. Gaming boomed in lockdown and market value will reach $320bn | World ….

2. Market Trends: Is the Mystery, Suspense, Thriller Genre Alive and Well ….

3. Category:Psychological thriller video games – Wikipedia.

Google Bard AI Chat Launch

Google Launches Bard, a New AI Experiment that Lets You Chat with LaMDA

Google Bard AI chat, built on LaMDA, which was previously a private research project inside Google and not available to the public, is now open for access requests. Google positions Bard as a powerful and versatile assistant for tasks such as writing code, citing advantages like access to more up-to-date information, a strong understanding of context, and creative flexibility. It is still under development, but it has the potential to change the way programmers write code. We have already requested access and were lucky to be accepted instantly, giving us the chance to try Bard early.

Google has launched a new AI experiment called Bard, which lets users chat with LaMDA, the company’s language model for dialogue applications.

What is LaMDA?

LaMDA is a deep learning model that can generate natural language responses for open-ended conversations. It was introduced by Google in 2021 as a way to make information more accessible and engaging for users.

LaMDA is trained on a large corpus of text from various sources, such as books, web pages, and social media posts. It can handle different topics, tones, and contexts, and it can adapt to the user’s preferences and goals.

LaMDA is also designed to be safe and aligned with Google’s principles for responsible AI. It has mechanisms to avoid generating harmful or misleading content, such as filters, feedback loops, and human oversight.

How does Bard work?

Bard is a direct interface to LaMDA that allows users to interact with it using natural language queries and commands. Users can sign up to try Bard at bard.google.com and start chatting with LaMDA on various topics and tasks.

Bard can help users with productivity, creativity, and curiosity. For example, users can ask Bard to write a poem, summarize an article, brainstorm ideas for a logo, or find the best deals for a product. Bard can also answer questions, explain concepts, or spark ideas.

Bard often provides multiple drafts of its response so users can pick the best starting point for them. Users can also ask follow-up questions or request alternatives from Bard. Bard learns from user feedback and behavior to improve its responses over time.

Why is Bard important?

Bard is an early experiment that showcases the potential of conversational AI to enhance human capabilities and experiences. It also highlights the growing competition between Google and rivals such as OpenAI, whose GPT-4 model powers the new Bing.

Bard is currently available as a preview for users in the U.S. and the U.K., and Google plans to expand it to more countries and languages in the future. Google also welcomes feedback from users and experts to improve Bard and make it more useful and trustworthy.

Bard is a remarkable example of how AI can make information more accessible and engaging for users. It also challenges the traditional search engine model and embraces a more natural and interactive way of finding and creating content.


Resources:

https://bard.google.com/
https://blog.google/technology/ai/try-bard/
https://www.zdnet.com/article/how-to-use-google-bard-now/
https://www.wizcase.com/download/google-bard/

https://www.theverge.com/2023/4/21/23692517/google-ai-bard-chatbot-code-support-functions-google-sheet


New Bing is Powered by GPT-4

The New Bing is Powered by GPT-4, the Latest AI Breakthrough from OpenAI

Bing, the search engine from Microsoft, has undergone a major upgrade with the help of GPT-4, the latest and most advanced artificial intelligence system from OpenAI.

What is GPT-4?

GPT-4, which was announced by OpenAI on March 14, 2023, is a deep learning model that can generate natural language responses for a variety of tasks, such as answering questions, chatting, writing, and creating. It is based on a massive amount of data and computation, and it can learn from human feedback and real-world use.

GPT-4 is the successor to GPT-3, which was released in 2020 and was widely considered a breakthrough in natural language processing. GPT-4 surpasses GPT-3 with broader general knowledge, stronger problem-solving abilities, greater creativity and collaboration, and support for visual input.

How does Bing use GPT-4?

Microsoft confirmed that the new Bing is running on GPT-4, which it has customized for search. The new Bing allows users to search, answer, chat, and create at Bing.com, using natural language queries and commands. For example, users can ask Bing to write a poem, summarize an article, generate a logo, or find the best deals for a product.

The new Bing also benefits from the continuous improvements that OpenAI makes to GPT-4 and beyond. According to OpenAI, GPT-4 is safer and more aligned than its previous versions, and it can produce more accurate and factual responses. It also has more creativity and collaboration capabilities, and it can handle visual input and longer context.
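
The new Bing itself is a consumer product rather than an API, but the GPT-4 family behind it is available to developers through OpenAI’s API. As a rough illustration of sending a natural-language request to GPT-4, here is a minimal sketch using libcurl; the endpoint and model name follow OpenAI’s public documentation, and YOUR_API_KEY is a placeholder:

```cpp
// Minimal sketch: a GPT-4 chat-completion request via OpenAI's public API.
// Build with: g++ gpt4_query.cpp -lcurl
// YOUR_API_KEY is a placeholder; the raw JSON response is printed as-is.
#include <curl/curl.h>
#include <cstdio>
#include <string>

// libcurl write callback: append the response body to a std::string.
static size_t OnWrite(char* data, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    const std::string body =
        R"({"model":"gpt-4","messages":[{"role":"user","content":"Summarize this article in one sentence."}]})";

    std::string response;
    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Authorization: Bearer YOUR_API_KEY");

    curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, OnWrite);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK)
        std::printf("%s\n", response.c_str());

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```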

How can I try the new Bing?

The new Bing is currently available as a preview for users who sign up at Bing.com. Microsoft said that it will update and improve the new Bing based on community feedback and user behavior.

The new Bing is a remarkable example of how AI can enhance human capabilities and experiences. It also showcases the collaboration between Microsoft and OpenAI, which have partnered since 2019 to accelerate the development and adoption of AI technologies.



Resources:

GPT-4 – OpenAI


Confirmed: the new Bing runs on OpenAI’s GPT-4 | Bing Search Blog

Microsoft invests $1 billion in OpenAI to pursue artificial general intelligence – The Verge

