Vol.71 - Ready or Not Development Briefing
Author: catindabox
[h1][b]Attention Officers, [/b][/h1]
Thank you for joining us for the 71st edition of our development briefing, March 29th, 2024!
This week we’ll look at our new dynamic dialogue system for AI, which will give us more flexibility and nuance for storytelling, as well as showcase some of the recent work on our lip sync system.
[i]These development briefings serve to keep you in the loop about parts of our ongoing support for Ready or Not, although they do not encompass everything that we’re working on at a given moment. [b]Please keep in mind that everything in this development briefing is work in progress and subject to change.[/b] [/i]
[i]Vol.71 Development Briefing [b]Summary Points: [/b][/i]
[i][b](not a changelog) [/b][/i]
[list][*] Dynamic Dialogue System [list] [*] Allows for easily creatable dialogue strings between multiple characters
[*] Intricate level of customization
[*] Complements existing reaction voice line system
[*] New dialogue being written and recorded[/list] [*] Lip Sync System [list][*] Video showcase of the recent early implementation of our lip sync system[/list] [/list]
[h2][b][u][i]Dynamic Dialogue[/i] [/u][/b][/h2]
When you and your SWAT AI officers are moving through missions, you may have noticed their penchant for verbally reacting to their environments. With a few notable exceptions, these reactions are generally one-liner voice lines prompted by moving through invisible spaces that we call “reaction volumes.”
What we are working on now is a new, supplemental dialogue system that is more streamlined and nuanced, allowing us to easily create dynamic dialogue strings between characters on a large scale. These interactions can be triggered by invisible spaces called “dialogue volumes.”
A key strength of this new system is the ease and depth with which any developer working on a level can customize the dialogue parameters.
How and when dialogue is triggered, the number of participants or voice lines involved, how dialogue audio is emitted, and granular elements of dialogue content are all among the variables we can adjust.
For the dialogue volumes themselves we can tweak the trigger delay, which specific characters can trigger the volume (i.e. SWAT, the player, civilians, suspects), and which characters must be present to trigger a particular dialogue.
To avoid repetition, we can also specify whether a dialogue may repeat, and we can give certain dialogue strings or lines a random chance of occurring in the first place.
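To make the volume-level options above more concrete, here is a minimal sketch of what such a configuration could look like. This is purely illustrative: the names, structure, and language are our own assumptions for explanation, not the game's actual code.

```python
import random
from dataclasses import dataclass, field

@dataclass
class DialogueVolumeConfig:
    """Illustrative settings for a hypothetical dialogue volume."""
    trigger_delay: float = 0.0            # seconds to wait after entry before dialogue starts
    triggered_by: set = field(default_factory=lambda: {"player", "swat"})  # who can trip the volume
    required_present: set = field(default_factory=set)  # characters that must be nearby
    repeatable: bool = False              # may the dialogue fire more than once?
    chance: float = 1.0                   # random probability the dialogue occurs at all
    _fired: bool = False

    def should_trigger(self, entering, nearby):
        """Return True if a character entering the volume should start the dialogue."""
        if self._fired and not self.repeatable:
            return False                  # already played and marked non-repeating
        if entering not in self.triggered_by:
            return False                  # e.g. a civilian walking into a SWAT-only volume
        if not self.required_present <= set(nearby):
            return False                  # a required participant is missing
        if random.random() > self.chance:
            return False                  # failed the random-occurrence roll
        self._fired = True
        return True
```

For example, a volume with `triggered_by={"player"}` and `required_present={"swat"}` would fire only when the player enters while a SWAT officer is nearby, and only once unless `repeatable` is set.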
The dialogue voice line emitter can be attached to practically anything, which opens up a lot of avenues. This also means that static environmental characters like those in the Station can easily have dialogue as well.
[u][i](Image below: Examples of some static character scenes around the Station that may benefit from this system)[/i][/u]
[img]https://clan.cloudflare.steamstatic.com/images//36750080/45f242d98133a47cec8b27f90f82276d438ccc81.png[/img]
[img]https://clan.cloudflare.steamstatic.com/images//36750080/18a84f321a90be76e62aca971c4f0f443ec90eb4.png[/img]
[img]https://clan.cloudflare.steamstatic.com/images//36750080/3e04fc673dd8cfcf0126b6e65b8bf944520bb24b.png[/img]
The content of the lines for each participant in a dialogue string can be set to play a specific voice line, or a randomized voice line that corresponds to a given situation. The given order of each voice line and which character says it are also conveniently specifiable. If a character isn’t present or required for a dialogue, then their line can be specifically left out from the dialogue string.
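As a rough sketch of how such a dialogue string might be assembled, each entry in the string pairs a character with either a fixed voice line or a pool of situational lines, and entries for absent characters are simply left out. Again, the names and structure here are hypothetical, chosen only to illustrate the idea.

```python
import random

def build_dialogue(entries, present):
    """Assemble an ordered list of (character, line) pairs.

    entries: ordered list of dicts with keys:
      'character' - who speaks the line
      'line'      - a specific voice line, OR
      'pool'      - a list of situational lines to pick from at random
    present: set of characters currently in the scene
    """
    result = []
    for entry in entries:
        speaker = entry["character"]
        if speaker not in present:
            continue  # absent character: their line is left out of the string
        line = entry.get("line") or random.choice(entry["pool"])
        result.append((speaker, line))
    return result
```

So a two-officer exchange where the second officer's reply is drawn from a random pool would still play cleanly as a single line if that second officer happens not to be present.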
We have more dialogue on the way being written and recorded that should be able to plug right into this setup too.
Altogether, these feature configurations harmonize to create a dynamic, character-rich atmosphere that complements our existing reaction voice line system.
[h2][i][u][b]In Sync [/b][/u][/i][/h2]
The lip sync animation system mentioned in our previous development briefing has undergone a facelift during implementation, and we're ready to show it off.
After plenty of tweaking to get out of the prototype phase, the lip movements generated by our system are progressing nicely and pairing with some additional facial animations.
Still, the feature is in its early stages, pending work on improved animation integration for head and eye tracking movements relative to in-game objects. In its completed state it will serve as a nice level of polish to our dialogue system and voice lines.
[i][u](Video below: SWAT officer model making callouts with the WIP lip sync system, utilizing alert and angry voice lines) [/u][/i]
[previewyoutube=wJhYTGbPyUI;full][/previewyoutube]
[h1]Conclusion[/h1]
As you can see, our longer-term AI enhancements are not limited to behavior; they also cover improved storytelling and immersion across the game.
These new systems are potentially applicable to both current and future content.
This concludes our 71st development briefing. Be sure to tune in next time for more development news!
[i]Special thanks to Zack for his photos and brilliant work on the new dialogue system, and to Alex and killowatt for their work and footage on the high-fidelity lip sync system.[/i]
[img]https://clan.cloudflare.steamstatic.com/images//36750080/90eea2aeb7449fef24bf55b66fe1bc4c923c6b69.jpg[/img]
Make sure you follow Ready or Not on Steam [url=https://store.steampowered.com/app/1144200/Ready_or_Not/]here[/url].
Our other links: [url=https://discord.gg/readyornot]Discord[/url], [url=https://twitter.com/voidinteractive]X[/url], [url=https://www.youtube.com/channel/UCj3qNoOLkzHJ60vxG6xQvlA]YouTube[/url], [url=https://www.instagram.com/void_interactive/]Instagram[/url], [url=https://www.facebook.com/VOIDInteractive/]Facebook[/url].
[h3][b]Stack up and clear out. [/b]
[i]VOID Interactive[/i] [/h3]