Epic Online Learning New Course on Unreal Engine Cinematics

Learn how to create a cinematic shoot set in a photorealistic city using Unreal Engine 5.0

Epic Online Learning, the online education platform of Epic Games, has announced a new course titled “Supporting Photorealism through Cinematics”. The course teaches Unreal Engine Cinematics and how to create a cinematic shoot that takes place in a photorealistic city using Unreal Engine 5.0.

The course is designed for intermediate to advanced users who want to learn how to use Sequencer, the powerful cinematic tool in Unreal Engine, to create realistic and immersive scenes. The course covers topics such as camera movement, camera shake, rendering, and final touch-ups.

Meet the instructor

The course is taught by Indrajeet “Indy” Sisodiya, a senior compositor and Unreal environment artist at Pixomondo, a global visual effects company. Sisodiya has over eight years of experience delivering high-quality visuals for film, TV, and commercials, including projects such as Halo, The Mandalorian, Star Trek, Fallout 4, and the Fast & Furious franchise. He is also a VFX mentor at Seneca College in Toronto and a recent Unreal Fellowship graduate.

Indrajeet “Indy” Sisodiya, senior compositor and Unreal environment artist

Explore the partnership with KitBash3D

The course is the second of two courses produced in partnership with KitBash3D, a leading provider of 3D assets for digital art. The first course, “Designing Photoreal Environments for Cinematics”, covers Unreal Engine cinematography and shows how to create a realistic city environment using a Post-Process Volume and KitBash3D’s Neo City Mini Kit.

Access the Unreal Engine Cinematics course and project files for free

The course is available for free on the Epic Developer Community website. Users can also download the project files from KitBash3D to follow along with the course. The course has a running time of two hours and 16 minutes and consists of six modules.

Discover more about Epic Online Learning and Epic Games

Epic Online Learning is a community-driven platform that offers tutorials, courses, talks, demos, livestreams, and learning paths for various applications of Unreal Engine, such as games, film, TV, architecture, visualization, virtual production, and more. The platform also allows users to create and share their own educational content with other learners.

Epic Games is the creator of Unreal Engine, the world’s most open and advanced real-time 3D tool. Unreal Engine is used by millions of creators across games, film, TV, architecture, automotive, manufacturing, and more. Epic Games also develops Fortnite, one of the world’s most popular games with over 350 million accounts and 2.5 billion friend connections.

Sources:

  1. Supporting Photorealism through Cinematics – Course Overview, Epic Developer Community (epicgames.com)


Elon Musk xAI Unveils Grok: AI that Understands the World

xAI, a new company founded by Elon Musk, has launched Grok, a chatbot that can converse with users on various topics using X, Musk’s popular social media platform.

Elon Musk xAI Unveils Grok: A Revolutionary AI Chatbot That Understands the World

Elon Musk’s ambitious AI venture, xAI, has officially unveiled Grok, a groundbreaking AI chatbot designed to change how people interact with computers. With its ability to access and process real-time information, engage in humorous banter, and provide comprehensive answers to complex questions, Grok is poised to set a new standard for AI chatbots.

Elon Musk xAI unveils Grok: A New Era of AI-Powered Communication

Grok represents a significant step forward in AI technology. Unlike traditional chatbots that rely on pre-programmed responses and limited understanding, Grok uses advanced natural language processing (NLP) and machine learning to interpret the context and intent of user queries. This allows Grok to hold natural, fluid conversations that come much closer to human interaction.

“Grok is a testament to the incredible potential of AI to transform the way we interact with technology,” said Elon Musk, CEO of xAI. “We believe that Grok has the potential to revolutionize how we communicate, learn, and access information.”

What is xAI?

xAI is a new company founded by Elon Musk that sets out to understand the universe. According to the company’s website, “The goal of xAI is to understand the true nature of the universe.”

xAI is a separate company from X Corp, but will work closely with X (Twitter), Tesla, and other companies to make progress towards its mission. xAI is led by a team of experienced engineers and researchers who have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. They have contributed to some of the most widely used methods and breakthroughs in the field of artificial intelligence, such as the Adam optimizer, Batch Normalization, Layer Normalization, adversarial examples, Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, μTransfer, AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.

What is Grok?

Grok is one of the first products of xAI. It is an AI chatbot that can converse with users on various topics using X, Musk’s social media platform (formerly known as Twitter). Grok is designed to answer questions with a bit of wit and has a rebellious streak. According to xAI, Grok is modeled after “The Hitchhiker’s Guide to the Galaxy”, the science fiction comedy series by Douglas Adams, and is intended to “answer almost anything and, far harder, even suggest what questions to ask!”

What makes Grok stand out from other language models, such as OpenAI’s ChatGPT, Google’s PaLM, and Microsoft’s Bing Chat, is its ability to access information from X in real-time. X is where Musk shares his thoughts and opinions on various topics, such as technology, science, business, and politics. Grok can use X as a source of knowledge and inspiration, as well as a way of interacting with other users and celebrities. For example, Grok can quote Musk’s tweets, comment on current events, or even generate its own tweets using X’s API.

How to use Grok?

Grok is currently available to a limited number of users in the United States who have an X Premium+ subscription. Users can access Grok through the X app or website, or by using a special link that xAI provides. Users can chat with Grok by typing their messages or using voice commands. Grok can respond with text, voice, images, or videos. Users can also give feedback to Grok by rating its responses or reporting any issues.

Why Grok?

Musk and xAI present Grok as a remarkable achievement in the field of artificial intelligence and a testament to their ambition and vision. They say that Grok outperforms GPT-3.5, the model behind the initial release of ChatGPT, on standard benchmarks, and that it can generate more coherent and diverse text.

They also say that xAI’s ultimate goal is to create an artificial general intelligence (AGI) that can surpass human intelligence and understand the universe. They say that Grok is a step towards that goal and that they are working on improving its performance and capabilities.

What are the challenges and risks?

Grok is undoubtedly an innovative and promising product, but it also raises many questions and challenges that need to be addressed. Some of the issues that Grok may face are:

  • How will Grok affect the way people communicate and learn? Will Grok enhance or hinder human communication and education? Will Grok help or harm human creativity and curiosity?
  • How will Grok handle sensitive and controversial topics? Will Grok respect or violate human values and ethics? Will Grok promote or prevent diversity and inclusion?
  • How will Grok ensure its accuracy and accountability? Will Grok provide reliable and trustworthy information and sources? Will Grok admit or hide its mistakes and limitations?
  • How will Grok cope with its own biases and preferences? Will Grok be fair and impartial or biased and partial? Will Grok be transparent or opaque about its reasoning and motivations?
  • How will Grok interact with humans and other intelligent agents? Will Grok cooperate or compete with other AI systems? Will Grok be friendly or hostile to humans?

These are some of the questions that Grok may or may not be able to answer, but they are certainly worth asking.


Sources:

  1. xAI’s official website:
    https://x.ai/
  2. “Hitchhiker’s Guide to the Galaxy” by Douglas Adams:
    https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy
  3. MSN article: Elon Musk’s xAI announces Grok: Here is what it is all about.
    https://www.msn.com/en-in/money/news/elon-musks-xai-announces-grok-here-is-what-it-is-all-about/ar-AA1jp92O


Mark Zuckerberg’s first Metaverse interview with Lex Fridman

Meta, the company formerly known as Facebook, has been pushing the boundaries of virtual reality and augmented reality with its latest products and innovations. At Meta Connect 2023, the company’s annual developer conference, Meta CEO Mark Zuckerberg announced the Meta Quest 3 mixed-reality headset, the next generation of its smart glasses with Ray-Ban, and a slew of AI updates, including the new Meta AI assistant and 28 AI characters for users to interact with on Facebook, Instagram, and WhatsApp. But perhaps the most impressive demonstration of Meta’s vision for the future of social media was Mark Zuckerberg’s first interview in the Metaverse with Lex Fridman, an AI researcher at the Massachusetts Institute of Technology and the host of the Lex Fridman Podcast. The interview, which aired on Fridman’s YouTube channel, showed the two conversing as photorealistic avatars of themselves, sitting in a virtual room that resembled Fridman’s studio.

Lex Fridman's first time inside the Metaverse

How codec avatars work

The avatars were created using Meta’s Codec Avatars technology, a deep generative model of 3D human faces that achieves state-of-the-art reconstruction performance. Both Fridman and Zuckerberg underwent extensive scans of their faces and expressions, which were used to build computer models. The headset then transferred the expressions each user made, in real time, onto the corresponding model. The result was super-realistic faces that captured subtleties in expression and showed minute details like the five o’clock shadow on Fridman’s face, the freckles around Zuckerberg’s nose, and the crinkles around their eyes.

Fridman was visibly amazed by the realism and presence of the experience. He said, “The realism here is just incredible… this is honestly the most incredible thing I have ever seen.” He also noted how precise the expressiveness came across, enabling him to read Zuckerberg’s body language. He even said, “I’m already forgetting that you’re not real.”

Zuckerberg said that this was one of the first times he had used the technology for an interview, and that he was excited to share it with the world. He said he believed this was the future of communication, where people could feel like they are together with anyone, anywhere, in any world. He also hinted that soon people would be able to create similar avatars with their phones, simply by saying a few sentences, making varied expressions, and waving the phone in front of their face for a couple of minutes to complete the scan.

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398

What they talked about

The interview covered a range of topics, from Meta’s vision for the Metaverse, to AI ethics and safety, to Zuckerberg’s personal interests and hobbies. The two also discussed their views on Elon Musk, who has been critical of Meta and Zuckerberg in the past. Fridman praised Zuckerberg for his optimism and courage in pursuing his dreams, while Zuckerberg complimented Fridman for his curiosity and passion for learning.

The interview was widely praised by viewers and commentators as a groundbreaking moment for VR and AR technology. Many expressed their interest and excitement to try out the codec avatars themselves, and to see what other possibilities the Metaverse could offer. Some also joked about how they would like to see other celebrities or politicians in the Metaverse, or how they would create their own avatars.

The interview was possibly the world’s first interview in the Metaverse using photorealistic avatars, and it showed what future meetings could look like on a virtual reality-based social media platform. It also showcased Meta’s leadership and innovation in creating immersive and interactive experiences that could transform how people connect, work, play, and learn.


Sources:

  1. Lex Fridman Podcast #398: Mark Zuckerberg – First Interview in The Metaverse. YouTube video, 1:23:45. Posted by Lex Fridman, October 5, 2023. https://www.youtube.com/watch?v=9Q9wRqgYnq8
  2. Mark Zuckerberg gives first interview in metaverse with Lex Fridman. The Verge, October 5, 2023. https://www.theverge.com/2023/10/5/22659876/mark-zuckerberg-first-interview-metaverse-lex-fridman
  3. Mark Zuckerberg gives first metaverse interview with AI researcher Lex Fridman. TechCrunch, October 5, 2023. https://techcrunch.com/2023/10/05/mark-zuckerberg-gives-first-metaverse-interview-with-ai-researcher-lex-fridman/
  4. Mark Zuckerberg gives first metaverse interview with MIT AI researcher. MIT News, October 5, 2023. http://news.mit.edu/2023/mark-zuckerberg-gives-first-metaverse-interview-with-mit-ai-researcher-1005


Meta Connect 2023: The Future of VR, AR, and AI

Meta, the company formerly known as Facebook, has recently held its annual developer conference, Meta Connect 2023, where it showcased its latest products and innovations in the fields of virtual reality (VR), augmented reality (AR), and artificial intelligence (AI). The two-day event, which was streamed online, featured keynote speeches, panel discussions, demos, and workshops that highlighted Meta’s vision for the future of social media and human connection.

Meta Connect 2023 keynote: the Meta Quest 3

The Meta Quest 3: The First Mainstream Mixed Reality Headset

One of the biggest announcements of the event was the launch of the Meta Quest 3, the next-generation VR headset that also supports mixed reality (MR). The Meta Quest 3 is a standalone device that does not require a PC or a smartphone to operate. It has a full-color passthrough mode that allows users to see their physical surroundings through the headset’s cameras and seamlessly switch between VR and MR experiences. The headset also features hand tracking and spatial audio, making it more immersive and interactive than ever before.

The Meta Quest 3 also boasts improved performance, thanks to its Qualcomm Snapdragon XR2 Gen 2 chip. It has a resolution of 2064 x 2208 pixels per eye, a refresh rate of up to 120 Hz, and a horizontal field of view of roughly 110 degrees. It also supports Wi-Fi 6E and Bluetooth connectivity, as well as USB-C charging and data transfer. The headset comes with two redesigned Touch Plus controllers that have haptic feedback and capacitive touch sensors.

The Meta Quest 3 is compatible with thousands of VR apps and games available on the Meta Store, as well as new titles that are optimized for MR. Some of the games that were showcased at the event include Red Matter 2, The Walking Dead: Saints & Sinners, Resident Evil 4 VR, Beat Saber: Imagine Dragons Edition, Lone Echo II, Splinter Cell VR, Assassin’s Creed VR, and more. The headset also supports Xbox Cloud Gaming, which allows users to play Xbox games on their Quest 3 using a compatible controller.

The Meta Quest 3 is priced at $499 for the 128 GB model and $649 for the 512 GB model. It is available for pre-order now and starts shipping on October 10, 2023.

Meta Quest 3 and the Ray-Ban Meta smart glasses at Meta Connect 2023

The Ray-Ban Meta Smart Glasses: The Next-Generation Smart Glasses

Another major product that was unveiled at the event was the Ray-Ban Meta Smart Glasses, the next-generation smart glasses that are designed in collaboration with Ray-Ban. The Ray-Ban Meta Smart Glasses are stylish and lightweight glasses that have built-in speakers, microphones, cameras, and sensors that enable users to access various features and functions using voice commands or gestures.

The Ray-Ban Meta Smart Glasses can be used to make calls, send messages, listen to music, take photos and videos, get notifications, access maps and directions, check the weather and time, and more. The glasses can also connect to the Meta Assistant app on the user’s smartphone, which provides additional functionality such as calendar reminders, news updates, social media feeds, and more. The glasses can also integrate with other Meta apps such as Facebook, Instagram, WhatsApp, Messenger, Horizon Worlds, and more.

The Ray-Ban Meta Smart Glasses have a battery life of up to six hours of continuous use or up to three days of standby time. They come with a magnetic USB-C charging cable and a protective case that doubles as a charger. The glasses are available in various styles, colors, sizes, and lens options (including prescription lenses). They are priced at $299 for the standard model and $399 for the polarized model. They are available for purchase now at select Ray-Ban stores and online.

The Meta Connect AI: New AI Features and Experiences

Meta also highlighted a slew of new AI features and immersive experiences that aim to enhance communication, creativity, and productivity across its platforms. Some of these include:

  • Codec Avatars: A deep generative model of 3D human faces that can create photorealistic avatars of users based on their facial scans and expressions. These avatars can be used for social interactions in VR or MR environments.
  • Horizon Workrooms: A VR collaboration tool that allows users to create virtual meeting rooms where they can work together with their colleagues or clients using their avatars or passthrough mode.
  • Horizon Worlds: A VR social platform that allows users to create and explore various virtual worlds with their friends or strangers using their avatars or passthrough mode.
  • Horizon Home: A VR personal space that allows users to customize their virtual home with various items and decorations.
  • Horizon Venues: A VR entertainment platform that allows users to watch live events such as concerts, sports, and comedy shows with other VR users using their avatars or passthrough mode.
  • Meta AI: A conversational AI assistant that can help users with various tasks and queries across Meta’s platforms and devices.
  • Meta AI Characters: A set of 28 AI characters that can interact with users on Facebook, Instagram, and WhatsApp using natural language and emotions. These characters can be used for entertainment, education, or companionship purposes.

The Metaverse: Meta’s Vision for a Connected and Immersive Virtual World

Meta also shared its vision for the metaverse, a connected and immersive virtual world that spans VR, AR, and MR devices and platforms. The metaverse is envisioned as a place where people can express themselves, socialize, learn, work, play, and create in new and exciting ways, and where users have more agency, ownership, and interoperability over their digital assets and identities.

Meta stated that it is working with various partners and developers to build the metaverse, and that it is committed to making it open, accessible, and inclusive for everyone. Meta also stated that it is investing in research and innovation to overcome the technical and ethical challenges that come with building the metaverse, such as privacy, security, safety, diversity, and sustainability.

Meta Connect 2023 was a showcase of Meta’s ambition and innovation in the fields of VR, AR, and AI. The event demonstrated how Meta is leading the way in creating immersive and interactive experiences that could transform how people connect, work, play, and learn in the future.

Sources:

  1. Meta Connect 2023 Keynote and Highlights in Just 5 Minutes. YouTube video, 5:04. Posted by Faultyogi, October 5, 2023. https://www.youtube.com/watch?v=Mpa4HOOTO8I
  2. Meta Connect 2023: Keynote Speech Highlights – XR Today. XR Today, September 27, 2023. https://www.xrtoday.com/event-news/meta-connect-2023-keynote-speech-highlights/
  3. Meta Quest 3: List of Game Announcements from Meta Connect 2023. Gamer Noize, October 5, 2023. https://gamernoize.com/meta-quest-3-list-of-game-announcements-from-meta-connect-2023/
  4. Meta Quest 3 Revealed: Meta Connect 2023 Keynote Livestream. IGN, September 27, 2023. https://www.ign.com/videos/meta-quest-3-revealed-meta-connect-2023-keynote-livestream


Bing Chat introduces DALL-E 3

Bing Chat, the chat mode of Microsoft Bing, has announced the integration of DALL-E 3, a state-of-the-art AI image generator that can create images from text descriptions. DALL-E 3 is the latest generation of OpenAI’s text-to-image model and is built natively on ChatGPT. It is developed by OpenAI, a research organization dedicated to creating and ensuring the safe use of artificial intelligence.

What can DALL-E 3 do?

DALL-E 3 can generate images for a wide range of concepts expressible in natural language, such as “an armchair in the shape of an avocado” or “a store front that has the word ‘openai’ written on it”. Compared with earlier versions, it follows long, detailed prompts far more faithfully and renders elements such as text, hands, and faces with noticeably greater accuracy.

How does DALL-E 3 work with Bing Chat?

DALL-E 3 is built natively on ChatGPT, another deep learning model that can generate natural language texts. ChatGPT is also integrated with Bing Chat, allowing users to chat with an AI assistant that can help them with various tasks, such as searching the web, writing essays, or creating graphic art. By using ChatGPT as a brainstorming partner and refiner of prompts, users can easily translate their ideas into exceptionally accurate images with DALL-E 3.
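For developers who want to experiment with the same model outside of Bing Chat, DALL-E 3 is also exposed through OpenAI’s public image API. A minimal sketch using the official openai Python package is shown below; this is the general developer path, not how Bing Chat invokes the model internally, and the prompt is just an example.

```python
# Minimal sketch: generate an image with DALL-E 3 via OpenAI's public API.
# Requires the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A ChatGPT-style brainstorming step could refine this prompt first;
# here we pass an already-refined description straight to the image model.
result = client.images.generate(
    model="dall-e-3",
    prompt="A storefront with the word 'openai' painted on the sign, photorealistic, dusk lighting",
    size="1024x1024",
    quality="standard",
    n=1,  # DALL-E 3 generates one image per request
)

print(result.data[0].url)  # URL of the generated image
```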

How can I access DALL-E 3?

DALL-E 3 is now generally available to everyone within Bing Chat and Bing.com/create, for free. The DALL-E 3 model from OpenAI delivers enhancements that improve the overall quality and detail of images, along with greater accuracy for human hands, faces, and text. Microsoft says that since the launch of Bing Image Creator, over 1 billion images have been generated, spanning illustrated stories, thumbnails for social media content, PC backgrounds, design inspiration, and much more.

How does OpenAI ensure the ethical use of DALL-E 3?

OpenAI has taken steps to limit DALL-E 3’s ability to generate violent, adult, or hateful content. It has also implemented mitigations that decline requests asking for a public figure by name or for an image in the style of a living artist. Creators can also opt their images out of training for future image generation models. OpenAI is also researching the best ways to help people identify when an image was created with AI, and is experimenting with a provenance classifier, a new internal tool that can help detect whether or not an image was generated by DALL-E 3.

Why should I use it?

Bing Chat is one of the first platforms to offer DALL-E 3 to its users, demonstrating its commitment to providing innovative and engaging services. Bing Chat users can access DALL-E 3 by typing “graphic art” followed by their text prompt in the chat window. They can also ask the chat assistant for suggestions or refinements of their prompts. Bing Chat hopes that DALL-E 3 will inspire its users to explore their creativity and imagination.

Sources:

  1. Parakhin M. (2023). DALL-E 3 now available in Bing Chat and Bing.com/create, for free! Retrieved from https://blogs.bing.com/search/october-2023/DALL-E-3-now-available-in-Bing-Chat-and-Bing-com-create-for-free
  2. Pierce D. (2023). You can now use the DALL-E 3 AI image generator inside Bing Chat. Retrieved from https://www.theverge.com/2023/10/3/23901963/bing-chat-dall-e-3-openai-image-generator
  3. Lee Hutchinson (2023). OpenAI’s new AI image generator pushes the limits in detail and prompt complexity. Ars Technica. Retrieved from https://arstechnica.com/information-technology/2023/09/openai-announces-dall-e-3-a-next-gen-ai-image-generator-based-on-chatgpt/



Epic MegaJam 2023 Kicks Off with Antiquated Future Theme

Epic MegaJam 2023: A Global Game Development Challenge

What is the Epic MegaJam?

The Epic MegaJam is an annual game development competition hosted by the Unreal Engine team at Epic Games, where participants create a game based on a given theme within a limited time. The event is open to anyone who wants to showcase their creativity and skills using Unreal Engine or Unreal Editor for Fortnite.

This year, the Epic MegaJam will kick off on Thursday, September 14, at 2 PM ET, during Inside Unreal on Twitch, YouTube and LinkedIn. The theme will be revealed at 3 PM ET, and participants will have until September 21, at 11:59 PM ET, to submit their games.

Participants can work alone or in teams of up to five members, and they can choose to use Unreal Engine or Unreal Editor for Fortnite to create their games. Unreal Engine is a powerful and versatile tool for creating games of any genre and platform, while Unreal Editor for Fortnite is a simplified version of the engine that allows users to create custom maps and modes for Fortnite.

Cover poster of Epic MegaJam 2023

What are the free resources and support for the Epic MegaJam?

To help participants get ready and inspired for the Epic MegaJam, Unreal Engine has provided several free resources and support options. These include:

  • Free assets from the Unreal Engine Marketplace: Participants can use any assets that are available on the Unreal Engine Marketplace or the Fortnite Creative Hub, as well as any assets that they have created before or during the jam. However, they must list any content that was created before the jam in their submission form. Some of the free assets that are available on the Marketplace are:
    • Animation Packs from RamsterZ: RamsterZ is a studio that specializes in creating high-quality animations for games. They have generously offered over 50 animation packs for free to all Epic MegaJam participants. These packs cover various genres and scenarios, such as combat, stealth, horror, comedy, romance, and more. You can download them from their website.
    • Environment Packs from Quixel: Quixel is a company that creates photorealistic 3D assets and environments using real-world scans. They have provided several environment packs for free to all Epic MegaJam participants. These packs include landscapes, buildings, props, and vegetation from different regions and themes, such as medieval, sci-fi, desert, forest, urban, and more. You can access them with your Epic Games account.
    • Sound Packs from Soundly: Soundly is a platform that offers thousands of sound effects and music tracks for games and media. They have given access to several sound packs for free to all Epic MegaJam participants. These packs include sounds for various genres and situations, such as action, adventure, horror, fantasy, sci-fi, and more. You can download them from their website.
    • Sound and Music from WeLoveIndies: WeLoveIndies is a platform that provides royalty-free sound and music for indie game developers. They have given free use of all sound and music from their catalogue for your Epic MegaJam project. You can create a free account and download their assets from their website.
  • Free access to Assembla: Assembla is a platform that enables game development teams to collaborate and manage their projects using Perforce, SVN and/or Git repositories. Assembla will grant access to their platform to all Epic MegaJam development teams for free. Teams can also use Assembla’s built-in PM tools to coordinate their efforts during the jam. You can sign up for Assembla here. Note: Repositories will be deleted 28 days after the jam concludes. Save your files locally to ensure they won’t be lost!
  • Free online courses from Udemy: Udemy is an online learning platform that offers courses on various topics and skills. Udemy has partnered with Unreal Engine to offer several courses on game development using Unreal Engine for free to all Epic MegaJam participants. These courses cover topics such as C++, Blueprints, VR, multiplayer, AI, animation, UI, and more. You can access them with your Epic Games account.
  • Free motion capture tools from Rokoko: Rokoko is a company that provides motion capture solutions for game developers and animators. They have offered two free tools for all Epic MegaJam participants:
    • Rokoko Video: Rokoko Video is an app that allows you to animate characters using your smartphone camera. You can record your own movements or use pre-made animations from Rokoko’s library. You can download the app for iOS or Android here.
    • Rokoko Studio Live: Rokoko Studio Live is a plugin that allows you to stream motion capture data from Rokoko’s hardware devices or Rokoko Video app directly into Unreal Engine. You can download the plugin here.
  • Free 3D scanning tools from Capturing Reality: Capturing Reality is a company that develops software for creating 3D models from photos or laser scans. They have offered two free tools for all Epic MegaJam participants who want to participate in the Edge of Reality modifier:
    • RealityScan: RealityScan is an app that allows you to create 3D models from photos taken with your smartphone camera. You can download the app for iOS or Android here.
    • RealityCapture: RealityCapture is a desktop software that allows you to create 3D models from photos or laser scans with high accuracy and detail. You can sign up for PPI credit redemption here. Once you have received your redemption code, you must log in with your Epic account and redeem it here.
  • Free textures from GameTextures.com: GameTextures.com is a platform that provides high-quality textures for game developers. They have given free access to their samples library for all Epic MegaJam participants. You can sign up for a free account and download their textures from their website.
  • Discounted 3D navigation devices from 3Dconnexion: 3Dconnexion is a company that produces devices that enable intuitive and immersive 3D navigation in Unreal Engine and other applications. They have offered a 20% discount on their products for all Epic MegaJam participants. You can learn more about their products and how to use them here.
  • Free resources from The Unreal Directive: The Unreal Directive is a website that provides Unreal Engine resources that are well-researched, easy to understand, and adhere to best development practices. They have offered free access to their articles, tutorials, and templates for all Epic MegaJam participants. You can check out their resources here.
  • Free support from Unreal Engine community: Unreal Engine has a vibrant and helpful community of developers and enthusiasts who are always ready to share their knowledge and experience.

What are the prizes and categories?

The Epic MegaJam 2023 will feature 1st place finalists for both Unreal Engine and Unreal Editor for Fortnite submissions, as well as 1st place student finalists for both tools. Additionally, there will be several modifier categories that reward games meeting certain criteria, such as being funny, innovative, or accessible.

The prizes for the Epic MegaJam 2023 include cash awards, Unreal Engine swag, hardware devices, software licenses, online courses, and more. The total value of the prizes exceeds $100,000. Some of the prizes are:

  • Cash awards: The 1st place Unreal Engine finalist will receive $5,000, the 2nd place Unreal Engine finalist will receive $2,500, and the 3rd place Unreal Engine finalist will receive $1,000. The 1st place Unreal Editor for Fortnite finalist will receive $2,500, the 2nd place Unreal Editor for Fortnite finalist will receive $1,250, and the 3rd place Unreal Editor for Fortnite finalist will receive $500. The 1st place student finalist (UE & UEFN) will receive $1,000, the 2nd place student finalist (UE & UEFN) will receive $500, and the 3rd place student finalist (UE & UEFN) will receive $250.
  • Unreal Engine swag: All finalists and modifier category winners will receive a package of Unreal Engine swag, such as t-shirts, hoodies, hats, stickers, pins, mugs, and more.
  • Hardware devices: All finalists and modifier category winners will receive a hardware device of their choice from a list of options provided by Unreal Engine. These options include laptops, tablets, smartphones, consoles, VR headsets, monitors, keyboards, mice, controllers, speakers, headphones, microphones, cameras, and more.
  • Software licenses: All finalists and modifier category winners will receive a software license of their choice from a list of options provided by Unreal Engine. These options include game engines, game engines plugins, game development tools, game design tools, game art tools, game audio tools, game testing tools, game publishing tools, game marketing tools, and more.
  • Online courses: All finalists and modifier category winners will receive an online course of their choice from a list of options provided by Unreal Engine. These options include courses on game development using Unreal Engine or Unreal Editor for Fortnite from Udemy or other platforms.

The winners will be announced on October 5th during a special livestream on Twitch, YouTube and LinkedIn.

How to join and submit?

To join the Epic MegaJam 2023, participants need to register on the official website and create an account on itch.io, where they will upload their games. Submissions must be packaged for Windows, macOS, Android, or iOS (development build), and they must include custom gameplay that exceeds the gameplay found in Epic Games’ starter templates. Submissions must also include a link to gameplay footage (between 30 and 60 seconds) demonstrating recorded gameplay.

Participants can use any assets that are available on the Unreal Engine Marketplace or the Fortnite Creative Hub, as well as any assets that they have created before or during the jam. However, they must list any content that was created before the jam in their submission form.

Why join the Epic MegaJam?

The Epic MegaJam is a great opportunity for game developers of all levels and backgrounds to challenge themselves, learn new skills, network with other developers, and have fun. The event also showcases the potential and diversity of Unreal Engine and Unreal Editor for Fortnite as game development tools.

By joining the Epic MegaJam, participants can also get feedback from industry experts and judges, as well as exposure to a global audience of gamers and enthusiasts. Moreover, participants can win great prizes and gain recognition for their hard work and creativity.

So what are you waiting for? Join the Epic MegaJam today and unleash your imagination!


Nvidia NeMo SteerLM: AI personalities for in-game characters

NVIDIA ACE Enhanced with Dynamic Responses for Virtual Characters

The new technique allows developers to customize the behavior and emotion of NPCs using large language models

Nvidia, a leading company in graphics and artificial intelligence, has announced a new technology, Nvidia NeMo SteerLM, that enables game developers to create intelligent and realistic in-game characters powered by generative AI.

The technology is part of Nvidia ACE (Avatar Cloud Engine), a suite of tools and frameworks that leverages Nvidia’s expertise in computer vision, natural language processing, and deep learning to generate high-quality 3D models, animations, voices, and dialogue for virtual characters.

One of the key components of Nvidia ACE is NeMo SteerLM, a tool that allows developers to train large language models (LLMs) to provide responses aligned with particular attributes ranging from humorous to helpful. NeMo SteerLM is based on Nvidia’s NeMo framework, which simplifies the creation of conversational AI applications.


What is NeMo SteerLM and how does it work?

NeMo SteerLM is a new technique that enables developers to customize the personality of NPCs for more emotive, realistic and memorable interactions.

Most LLMs are designed to provide only ideal responses, free of personality or emotion, as you can see by interacting with typical chatbots. With the NeMo SteerLM technique, however, LLMs are trained to provide responses aligned with particular attributes, ranging from humor to creativity to toxicity, all of which can be quickly configured through simple sliders.

For example, a character can respond differently depending on the player’s mood, personality, or actions. A friendly character can crack jokes or give compliments, while an enemy character can insult or threaten the player. A character can also adapt to the context and tone of the conversation, such as being sarcastic or serious.

The NeMo SteerLM technique helps turn polite chat bots into emotive characters that will enable developers to create more immersive and realistic games.
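As a rough illustration of the idea, an attribute-steered request can be thought of as the user's message prefixed with the desired slider values, which a SteerLM-trained model has learned to honor. The sketch below is a toy Python example; the prompt format, tags, and attribute names are invented for illustration and are not NVIDIA's actual NeMo API.

```python
# Toy illustration of attribute-steered prompting (SteerLM-style).
# The tags and attribute names here are invented; a real SteerLM model is
# trained on attribute-labeled data so that it honors these values.
def build_steered_prompt(user_message: str, **attributes: int) -> str:
    """Format a prompt with 0-9 attribute values, e.g. humor=9, toxicity=0."""
    attr_string = ",".join(f"{name}:{value}" for name, value in attributes.items())
    return (
        f"<attributes>{attr_string}</attributes>\n"
        f"<user>{user_message}</user>\n"
        f"<assistant>"
    )

# An NPC shopkeeper tuned to be helpful and jokey, but never toxic.
prompt = build_steered_prompt(
    "Do you have any swords for sale?",
    helpfulness=8,
    humor=9,
    toxicity=0,
)
print(prompt)  # feed this to an attribute-conditioned (SteerLM-trained) LLM
```

Raising or lowering a single value, such as humor, changes the character's tone for the same player input, which is the "slider" behavior described above.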

What are the benefits and challenges of using NeMo SteerLM?

The benefits of using NeMo SteerLM are:

  • It can create more engaging and immersive gaming experiences by enabling characters to have natural and diverse conversations with players.
  • It can reduce the development time and cost by automating the generation of dialogues and personalities for NPCs.
  • It can enable multiple characters with a single LLM by infusing personality attributes into the model.
  • It can create faction attributes to align responses to the in-game story – allowing the character to be dynamically influenced by a changing open world.

The challenges of using NeMo SteerLM are:

  • It requires a large amount of data and computational resources to train LLMs.
  • It may generate inappropriate or offensive responses that could harm the reputation or image of the game or developer.
  • It may encounter ethical or legal issues regarding the ownership and responsibility of the generated content.

How can developers access and use NeMo SteerLM?

NeMo SteerLM is part of Nvidia ACE for Games, a suite of AI models and services for building interactive game characters, covering the pipeline from speech recognition and language generation to AI-driven facial animation.

Alongside NeMo SteerLM, Nvidia ACE relies on other components such as:

  • Nvidia Riva, which provides automatic speech recognition and text-to-speech so characters can hear players and respond with voice.
  • Nvidia Omniverse Audio2Face, a tool that generates realistic facial animations from audio inputs using a neural network.

Related tools in Nvidia’s wider Omniverse ecosystem include Omniverse Kaolin, a framework for 3D deep learning that enables fast creation of 3D models from images, videos, or sketches, and Omniverse Machinima, a platform that allows users to create cinematic videos using assets from popular games.

Nvidia ACE is currently in early access and will be available to game developers later this year. For more information, visit the official website or watch the video.



Unreal Engine 5.3 Preview Released

What’s New in Unreal Engine 5.3

Epic Games has released a preview version of Unreal Engine 5.3, the latest update of its popular real-time 3D creation tool. The new version brings significant improvements and new features for developers and artists, such as enhanced lighting, geometry, and ray tracing systems, new tools for creating realistic hair and fur, and new frameworks for importing and exporting large landscapes and generating procedural content.

Lumen, Nanite, and Path Tracer

Unreal Engine 5.3 introduces major enhancements to the software’s Lumen, Nanite, and Path Tracer features, which were first introduced in Unreal Engine 5 Early Access.

Lumen is a fully dynamic global illumination solution that reacts to scene and light changes in real time, creating realistic and believable lighting effects. In Unreal Engine 5.3, Lumen supports multiple bounces of indirect lighting, volumetric fog, translucent materials, and sky lights.

Nanite is a virtualized micropolygon geometry system that enables users to create and render massive amounts of geometric detail without compromising performance or quality. In Unreal Engine 5.3, Nanite supports skeletal meshes, animation, morph targets, levels of detail (LOD), and collision detection.

Path Tracer is a physically accurate ray tracing solution that simulates the behavior of light in complex scenes, producing photorealistic images. In Unreal Engine 5.3, Path Tracer supports Lumen, Nanite, translucency, subsurface scattering, clear coat, and anisotropy.

Hair and Fur Grooming

Unreal Engine 5.3 also introduces new tools for creating high-fidelity hair and fur for digital characters and creatures. Users can import hair grooms from external applications such as Maya or Blender, or create them from scratch using the new Hair Strands Editor. Users can also edit the hair properties such as color, thickness, clumping, waviness, and curliness using the new Hair Material Editor.

Unreal Engine 5.3 also supports rendering hair and fur using either rasterization or ray tracing methods. Users can choose the best option for their project depending on the desired quality and performance.

World Building Tools

Unreal Engine 5.3 offers new world building tools that enable users to work on large open worlds collaboratively and efficiently. One of the new features is World Partition, which automatically divides the world into a grid and streams only the necessary cells based on the camera position and visibility. Users can also import and export large landscapes using the new Landscape Heightfield IO framework.

Another new feature is the Procedural Content Generation (PCG) framework, which enables users to define rules and parameters to populate scenes with Unreal Engine assets of their choice. Users can also control the placement, orientation, scale, rotation, and variation of the assets using the new Procedural Placement Tool.
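UE's PCG framework is configured through node graphs inside the editor rather than through code, but the underlying idea of rule-driven placement can be sketched in a few lines. The Python below is purely illustrative; the rules, asset names, and functions are invented for the example and do not correspond to the PCG framework's actual API.

```python
# Toy mental model of rule-driven procedural placement (invented names; the
# real PCG framework is an editor-side node graph, not a Python API).
import random

def scatter_points(area=(100.0, 100.0), density_per_unit=0.02, seed=42):
    """Sample candidate spawn points over a rectangular area."""
    rng = random.Random(seed)
    count = int(area[0] * area[1] * density_per_unit)
    return [(rng.uniform(0, area[0]), rng.uniform(0, area[1])) for _ in range(count)]

def apply_rules(points, road_x=50.0, min_distance_from_road=5.0):
    """Filter candidates by a simple rule: keep assets away from a road at x=road_x."""
    return [p for p in points if abs(p[0] - road_x) > min_distance_from_road]

def assign_assets(points, palette=("tree", "rock", "streetlight"), seed=7):
    """Pick an asset plus a random scale and rotation for each surviving point."""
    rng = random.Random(seed)
    return [
        {"asset": rng.choice(palette), "location": p,
         "scale": round(rng.uniform(0.8, 1.2), 2), "yaw": round(rng.uniform(0, 360), 1)}
        for p in points
    ]

instances = assign_assets(apply_rules(scatter_points()))
print(f"{len(instances)} instances placed, e.g. {instances[0]}")
```

In the real framework, the sampling, filtering, and spawning steps are PCG graph nodes whose parameters can be exposed and tweaked interactively, with results regenerating in the viewport.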

Experimental Features in Unreal Engine 5.3 Preview

Unreal Engine 5.3 also includes a number of new experimental features that introduce new capabilities for rendering, animation, visualization, and simulation.

  • Sparse Volume Textures and Volumetric Path Tracing are new features that enable users to create realistic volumetric effects such as smoke and fire using ray tracing. Sparse Volume Textures allow users to store and sample large volumes of data efficiently, while Volumetric Path Tracing simulates the interaction of light with the volume data.
  • Skeletal Editor is a new feature that allows users to do weight and skinning work for skeletal meshes in-engine. Users can edit the bone weights, vertex influences, and skinning methods using a visual interface.
  • Orthographic Rendering is a new feature that enables users to create parallel projection views of their scenes. This is particularly useful for architecture and manufacturing visualizations and stylistic games projects that require orthographic views.
  • Panel Cloth Editor and ML Cloth are new features that improve cloth tooling in Unreal Engine. Panel Cloth Editor allows users to create cloth simulations based on panels, which are flat pieces of cloth that can be stitched together. ML Cloth is a machine learning-based solver that can handle complex cloth behaviors such as stretching, bending, and tearing.

How to Download

Unreal Engine 5.3 Preview is available for download now from the official website. The software is free for personal use, education, and non-commercial projects. For commercial projects, users are required to pay a 5% royalty on gross revenue above the first $1 million in lifetime gross revenue per product, with royalties reported and paid quarterly.
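As a quick worked example of those terms, here is the royalty math on some hypothetical lifetime revenue figures (illustrative numbers only):

```python
# Worked example of the royalty terms described above, with made-up revenue.
ROYALTY_RATE = 0.05
THRESHOLD = 1_000_000  # USD, lifetime gross revenue per product

def royalty_due(lifetime_gross_revenue: float) -> float:
    """Royalty owed on the portion of revenue above the lifetime threshold."""
    return max(0.0, lifetime_gross_revenue - THRESHOLD) * ROYALTY_RATE

for revenue in (800_000, 1_000_000, 2_500_000):
    print(f"${revenue:,} lifetime gross -> ${royalty_due(revenue):,.0f} royalty")
# $800,000 lifetime gross -> $0 royalty
# $1,000,000 lifetime gross -> $0 royalty
# $2,500,000 lifetime gross -> $75,000 royalty
```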

Users who want to try out the new features of Unreal Engine 5.3 Preview should be aware that the software is still in development and may contain bugs or issues. Users are encouraged to report any feedback or problems to the Unreal Engine forums or the Unreal Engine GitHub page.


Resources:

https://www.unrealengine.com/en-US/download

https://forums.unrealengine.com/t/unreal-engine-5-3-preview/1240016

https://github.com/EpicGames/UnrealEngine

https://80.lv/articles/unreal-engine-5-3-preview-has-been-launched/

https://www.techpowerup.com/311795/unreal-engine-5-3-preview-out-now


NVIDIA RTX Unreal Engine 5.2 Branch Released

NVIDIA has announced the release of the NVIDIA RTX Branch of Unreal Engine 5.2, a version of the popular game engine that integrates NVIDIA’s RTX and neural rendering technologies. The NVIDIA RTX Branch of Unreal Engine 5.2 brings the latest advancements of the engine, such as Nanite, Lumen, and MetaHuman Creator, together with NVIDIA’s ray tracing, DLSS, and NGX features.

NVIDIA ACE for Games Sparks Life Into Virtual Characters With Generative AI

Featured RTX Technologies

The NVIDIA RTX Branch of Unreal Engine 5.2 includes several RTX technologies that enhance the graphics and performance of games and applications. These are:

  • RTX Global Illumination (RTXGI): RTXGI provides scalable solutions to compute infinite multi-bounce lighting and soft-shadow occlusions without bake times, light leaks, or expensive per-frame costs (NvRTX 5.0 & 4.27 ONLY). Learn more about RTXGI.
  • RTX Direct Illumination (RTXDI): RTXDI lets artists add unlimited shadow-casting, dynamic lights to game environments in real time without worrying about performance or resource constraints. Learn more about RTXDI.
  • Deep Learning Super Sampling (DLSS): DLSS leverages the power of a deep learning neural network to boost frame rates and generate beautiful, detailed images for your games. Learn more about DLSS.
  • NVIDIA Real-Time Denoisers (NRD): NRD is a spatio-temporal, API-agnostic denoising library that’s designed to work with low ray-per-pixel signals. Learn more about NRD.
  • Deep Learning Anti-Aliasing (DLAA): An AI-based anti-aliasing mode that uses the same technology powering NVIDIA DLSS, giving you even better graphics in your games. Learn more about DLAA.
  • NVIDIA Image Scaling (NIS): A platform agnostic software driver-based spatial upscaler for all games. Learn more about NIS.

What can developers do with the NVIDIA RTX Branch of Unreal Engine 5.2?

Developers can use the NVIDIA RTX Branch of Unreal Engine 5.2 to create stunning and realistic games and applications that leverage the power of NVIDIA’s GPUs and AI. The branch also includes support for NVIDIA Omniverse, a platform that enables collaborative creation and simulation across multiple applications and devices.

This branch is available now for developers to download and use from the NVIDIA Developer website. The branch is based on Unreal Engine 5.2, and an NVIDIA RTX GPU is required to take full advantage of its ray tracing and DLSS features.

How can developers get started with the branch?

Getting Started with RTXDI and NvRTX in Unreal Engine 5 (Part 1)

NVIDIA has also provided a number of sample projects and tutorials to help developers get started with the branch, such as the Marbles RTX demo, the DLSS Playground, and the MetaHuman Creator sample. Developers can also access the NVIDIA Developer forums and Discord server for support and feedback.

NVIDIA plans to update the branch regularly with new features and improvements as Unreal Engine 5 evolves. The company also encourages developers to share their feedback and suggestions on how to improve the branch and its integration with NVIDIA’s technologies.

Link: Branch of UE (NvRTX)

Why is this branch important for game development and graphics?

This branch is a testament to NVIDIA’s commitment to advancing the state of the art in game development and graphics. By combining the power of Unreal Engine 5 with NVIDIA’s RTX and neural rendering technologies, developers can create immersive and realistic experiences that push the boundaries of what is possible.

Resources:

NV Branch of UE (NvRTX)

NVIDIA Game Developer on LinkedIn

https://twitter.com/NVIDIAGameDev/status/1673740368528166913



Electric Dreams Environment UE: A Stunning Showcase of UE 5.2

Epic Games has released a new sample project for Unreal Engine 5.2 that demonstrates the power and potential of the latest features and technologies available in the engine. The project, called Electric Dreams Environment, is a large-scale, richly detailed environment that was first shown at the 2023 Game Developers Conference.

The Electric Dreams Environment sample project in Unreal Engine

What is Electric Dreams Environment and How Was It Built?

Electric Dreams Environment is a sample project that showcases several new, experimental features and technologies that are available in Unreal Engine 5.2, including:

  • Procedural Content Generation framework (PCG): A system that allows users to create tools, define rules, and expose parameters to populate large scenes with UE assets of their choice, making the process of large world creation fast, interactive, and efficient.
  • Substrate: A material authoring system that enables users to create complex materials with dynamic properties and behaviors using a node-based interface.
  • Unreal Engine’s latest physics developments: A set of features and improvements that enhance the realism and interactivity of physical simulations in UE, such as Chaos Destruction, Niagara Fluid Simulation, and Soundscape.

The project also showcases several existing Unreal Engine 5 features, such as:

  • Lumen: A fully dynamic global illumination solution that reacts to scene and light changes.
  • Nanite: A virtualized micropolygon geometry system that enables users to create scenes with massive amounts of geometric detail.
  • MetaSounds: A high-performance audio system that gives users complete control over sound creation and processing.

The Electric Dreams Environment was built by incorporating both traditional and procedural workflows directly within Unreal Engine using the PCG framework. The PCG framework allows users to create custom tools that can generate content based on rules and parameters that can be adjusted at runtime. For example, the project uses PCG tools to create buildings, roads, bridges, vegetation, props, and more.

The project also uses Substrate to create materials with dynamic properties and behaviors that react to the environment and the player’s actions. For example, the project uses Substrate to create materials for windows, holograms, neon signs, water puddles, and more.

How to Download and Explore It?

Electric Dreams Environment is a free sample project that anyone can download and use for learning purposes. The project is available from the Samples tab of the Epic Games Launcher or from the Unreal Engine Marketplace.

Electric Dreams Environment is graphically intensive and requires a powerful video card to run at a stable framerate. The recommended hardware specifications are as follows:

  • 12-core CPU at 3.4 GHz
  • 64 GB of system RAM
  • GeForce RTX 3080 (equivalent or higher)
  • At least 10 GB VRAM

The project also requires DirectX 12 support and up-to-date graphics drivers.

The project consists of several levels that demonstrate different aspects of the Electric Dreams Environment. Users can navigate the levels using keyboard and mouse controls or by using a drone controller. Users can also trigger different sequences that showcase the PCG tools, Substrate materials, physics features, and more.

Why Electric Dreams Environment Matters

Electric Dreams Environment is a stunning showcase of what Unreal Engine 5.2 can do and what users can achieve with it. The project demonstrates how users can create large-scale environments with high-fidelity graphics and realistic physics using both traditional and procedural workflows within UE.

The project also shows how users can leverage the new features and technologies in UE 5.2 to create immersive and interactive experiences with dynamic lighting, materials, sound, and more.

Epic Games hopes that Electric Dreams Environment will inspire users to explore the possibilities of Unreal Engine 5.2 and create their own amazing projects with it.

Electric Dreams and the Rivian R1T | UE5.2 Demo | GDC 2023

Resources:

Electric Dreams Environment | PCG Sample Project – Unreal Engine

Epic Games Unreal Engine Electric Dreams Art Blast – ArtStation Magazine

Electric Dreams Environment in Unreal Engine | Unreal Engine 5.2 Documentation


