I spent last weekend updating the automated test system. I've discussed the [url=https://wx3.com/building-the-mission-system-for-an-open-world-rpg/]mission system before[/url]: it handles not only player missions, but all "ad hoc" logic like tutorials, scripted events, random void events, etc. It is also how I run automated tests of key missions: basically, a series of missions that try to "speed run" the game with a lot of cheats, teleporting around and interacting with anomalies and characters. This is obviously not the same as how a player plays, but it does give [i]some[/i] level of confidence that there's no bug making it impossible to reach the end. For example, if a dialogue that triggers the spawning of a key artifact has an unsatisfied condition or an error, the test will get stuck on it.
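To give a rough idea of the shape of this, here's an illustrative sketch. It is not the actual implementation: the node kinds, names, and the GameApi surface are all invented for illustration.

[code]
// Illustrative sketch only: an automated test is just a mission whose
// nodes are cheat-powered actions (all names here are invented).
interface GameApi {
  cheatTeleport(destination: string): void;
  interact(target: string): void;
  hasFlag(flag: string): boolean;
}

type TestNode =
  | { kind: "teleport"; to: string }     // cheat: jump straight to a location
  | { kind: "interact"; target: string } // talk to a character, poke an anomaly
  | { kind: "expect"; flag: string };    // assert that progress actually happened

function runTestMission(nodes: TestNode[], game: GameApi): boolean {
  for (const node of nodes) {
    switch (node.kind) {
      case "teleport": game.cheatTeleport(node.to); break;
      case "interact": game.interact(node.target); break;
      case "expect":
        // If a condition never becomes satisfied (say, a key artifact
        // was never spawned), the run gets stuck here and fails.
        if (!game.hasFlag(node.flag)) return false;
        break;
    }
  }
  return true;
}
[/code]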
However, unexpected things sometimes happen during the test: not only because of random events, but because the chaotic nature of the universe means that small changes can ripple. So the precise sequence of events that even the automated test encounters is inherently unpredictable. That's not a bad thing in itself, but it does mean that an automated test sometimes fails for reasons unrelated to bugs: e.g., a new faction hailing the player when the test was expecting to interact with some planet or artifact.

As the game has gotten larger, Murphy's Law conspires with the Law of Large Numbers to increase the probability of automated tests failing. If there's a 0.1% chance that a mission node will unexpectedly block a test run, that's not a big deal when a run hits only 100 mission nodes: it fails about 10% of the time, and I can just run the autotest again. But with 1,000 nodes, most runs will fail.
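The arithmetic behind that, assuming each node independently has a 0.1% chance of blocking a run (independence is only roughly true in practice, but it's close enough for intuition):

[code]
// Chance that at least one of n mission nodes blocks a run, where each
// node fails independently with probability p.
function runFailureProbability(p: number, n: number): number {
  return 1 - Math.pow(1 - p, n);
}

console.log(runFailureProbability(0.001, 100));  // ~0.095: rerunning is cheap
console.log(runFailureProbability(0.001, 1000)); // ~0.632: most runs fail
[/code]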
So I spent the weekend making the automated tests more robust, including a good chunk of Saturday tracking down a bug that turned out to be in one of the mission system's "cheats". By Monday, I had the system robust enough that it could get through the content up to the new ending 90% of the time.
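A lot of the robustness work boils down to not treating every surprise as fatal. Here's a hypothetical sketch of the general idea, not the actual code; the step signature and dismissInterruptions are invented for illustration:

[code]
// Hypothetical: retry a blocked test step a few times, clearing transient
// interruptions (like an unexpected hail) between attempts, before
// declaring the whole run failed.
type Step = () => boolean;

function dismissInterruptions(): void {
  // Invented stand-in: e.g., close any unexpected dialogue or hail
  // so the step can proceed on the next attempt.
}

function runStepWithRetries(step: Step, attempts: number = 3): boolean {
  for (let i = 0; i < attempts; i++) {
    if (step()) return true; // step completed normally
    dismissInterruptions();  // clear whatever got in the way, then retry
  }
  return false;              // genuinely stuck: report a real failure
}
[/code]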
Armed with this more robust system, I went ahead and made some refactors that I'd been putting off. For example, the game has two very similar systems for spawning recurring NPC encounter groups. One was the first version implemented and was used mostly in the early part of the game; the other was a more flexible system added later. They both work, but having both is a form of technical debt: if some faction unexpectedly had too few or too many ships, I had to investigate which system was being used. So I got rid of the first system.

Finally, yesterday, I started a full playthrough of Jupiter with the goal of identifying changes I want for the next opt-in build. I have a tentative goal of getting it out by the end of the month, but this will depend on how the rest of the playthrough goes.

Major tasks completed in the past week:

[list]
[*] Updated the automated test system
[*] Implemented copy/cut/paste for the mission logic editor (I now regret not having done this months ago; it turned out not to be that hard and would have saved so much time)
[*] Fixed various bugs encountered along the way
[*] Added mission debugging features
[*] Refactored encounter spawns
[*] Fixed an issue causing periodic skill checks to generate unnecessary memory allocations (garbage); a sketch of the general pattern is below
[*] Began full playthrough
[/list]
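On the skill-check garbage fix: the specifics are engine-dependent, but the general pattern is the usual one of hoisting per-tick allocations out of the hot path. A hypothetical sketch (the Skill shape and function names are invented for illustration):

[code]
// Hypothetical: a periodic check that builds a fresh list every tick
// generates steady garbage; reusing one scratch buffer avoids the
// per-tick allocation.
interface Skill { name: string; cooldownRemaining: number; }

const scratch: Skill[] = []; // allocated once, reused by every check

function eligibleSkills(all: Skill[]): Skill[] {
  scratch.length = 0;        // clear in place rather than allocating anew
  for (const s of all) {
    if (s.cooldownRemaining <= 0) scratch.push(s);
  }
  return scratch;
}
[/code]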
Until next week!

- Kevin