AvaKit v1.1.7 Update


AvaKit: the all-in-one streaming app for VTubers. Start real-time motion capture with just a single camera. High-quality rendering with 100% lilToon support, perfect sync, and the variety of content AvaKit has prepared for you will take you into an awesome virtual world.

[h3][b]For now, only the Windows version is updated.[/b][/h3]
[url=https://store.steampowered.com/news/app/2363370/view/3671048540134058686]The reason why only Windows is available is explained here.[/url]

AvaKit update week has arrived with interesting new features! Your feedback really helps us develop a better AvaKit 🙇‍♂️

[img]{STEAM_CLAN_IMAGE}/43977778/e82b5d9e500f67edc3da72f98d9c98eb6a56cdfb.png[/img]

[h2]Upgraded VMC protocol[/h2]
AvaKit currently supports three tracking methods: webcam tracking, perfect sync via iFacialMocap, and the VMC protocol. We have received several requests about the precision of VMC protocol tracking, so this update improves the entire VMC protocol configuration.

Previously, lower-torso motion data from the VMC protocol was not received, out of concern that it could undermine AvaKit's stability. From v1.1.7 this data is accepted, which means [b]full-body motion data can now be sent to AvaKit over the VMC protocol[/b]. You can also select which body parts, from the head down to both legs, AvaKit receives motion data for, giving users far more options. (For the technically curious, a small sketch of what this data looks like on the wire is at the end of this post.)

[h3]Compatibility with VRM Posing Desktop[/h3]
[url=https://store.steampowered.com/app/1895630/VRM_Posing_Desktop][b]VRM Posing Desktop[/b][/url], a tool for creating simple animations or precise poses for thumbnails, is now compatible with AvaKit. The connection works through the VMC protocol. For example, strike a pose in VRM Posing Desktop like below,

[img]{STEAM_CLAN_IMAGE}/43977778/5b1d2de7f3b13de6581d164cd71f22dbf50246bc.png[/img]

then set the tracking method to 'VMC protocol' to receive the data. The result looks like this:

[img]{STEAM_CLAN_IMAGE}/43977778/1b6c5e9556af21f545db2a589946e212c67fa100.png[/img]

As mentioned above, receiving motion data for only part of the body, such as the legs or the head, is also possible. (Please note that data already received cannot be reset, even if you disable that part afterwards.)

[img]{STEAM_CLAN_IMAGE}/43977778/fd1827c8155182233ce0895a54da568ece8c30ab.png[/img]

The demo video is below.
[previewyoutube=Rd7qD2hKaXA;full][/previewyoutube]

[h2]Mouth tracking sensitivity added[/h2]
From v1.1.7, you can set the sensitivity of camera-based mouth tracking anywhere from 0 to 50. The higher the sensitivity, the more dramatically your 3D avatar's mouth moves. (A rough illustration of this scaling is also sketched at the end of this post.)

[previewyoutube=QwH5ur1CpfE;full][/previewyoutube]

[img]{STEAM_CLAN_IMAGE}/43977778/6585df6e272c8bc775c7d0c46fbbe445eec12dbe.png[/img]

[h2]Rendering resource optimization[/h2]
We discovered a process that was unnecessarily wasting rendering resources, so this update optimizes it.

We'll always try our best to improve AvaKit. Thank you.

[url=https://twitter.com/AvaKit_EN][img]{STEAM_CLAN_IMAGE}/43977778/b22c8f31dbd6bc915906c9722dac400324685d29.jpg[/img][/url] [url=https://discord.com/invite/e8qGrZA3CT][img]{STEAM_CLAN_IMAGE}/43977778/b7437995b02ecc0e033e7a84ea8c9a02413d51c1.png[/img][/url]
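
[h3]For the technically curious: VMC data on the wire[/h3]
The VMC protocol is OSC over UDP: a performer application sends one /VMC/Ext/Bone/Pos message per humanoid bone, with bone names following Unity's HumanBodyBones convention. Below is a minimal sketch of a performer sending one frame, using the python-osc library. The localhost address and port 39539 (the VMC protocol's default) are assumptions, so check your AvaKit settings for the actual listening port, and note the pose values are placeholders rather than real tracking data.

[code]
import time
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# Port 39539 is the VMC protocol default; AvaKit's port may differ (assumption).
client = SimpleUDPClient("127.0.0.1", 39539)

# Placeholder positions with identity rotations. Bone names follow Unity's
# HumanBodyBones, as the VMC protocol requires. The upper-leg bones are the
# kind of lower-body data AvaKit accepts from v1.1.7 onward.
pose = {
    "Hips": (0.0, 1.0, 0.0),
    "Spine": (0.0, 1.1, 0.0),
    "Head": (0.0, 1.5, 0.0),
    "LeftUpperLeg": (0.1, 0.9, 0.0),
    "RightUpperLeg": (-0.1, 0.9, 0.0),
}

for bone, (px, py, pz) in pose.items():
    # /VMC/Ext/Bone/Pos: bone name, position x/y/z, rotation quaternion x/y/z/w
    client.send_message("/VMC/Ext/Bone/Pos",
                        [bone, px, py, pz, 0.0, 0.0, 0.0, 1.0])

# Standard per-frame status messages: performer availability and relative time.
client.send_message("/VMC/Ext/OK", 1)
client.send_message("/VMC/Ext/T", time.monotonic())
[/code]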
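
The mouth sensitivity slider can be pictured as a gain applied to the camera-detected mouth-open value. AvaKit does not publish the actual formula, so the linear mapping and the neutral point of 25 below are purely illustrative assumptions, meant only to show why higher values move the mouth more dramatically.

[code]
# Illustrative only: NOT AvaKit's actual formula. Assumes a linear gain
# where a sensitivity of 25 leaves the detected value unchanged.
def apply_mouth_sensitivity(raw_open: float, sensitivity: int) -> float:
    gain = sensitivity / 25.0                    # 0 mutes, 50 doubles movement
    return min(max(raw_open * gain, 0.0), 1.0)   # clamp to blendshape range

print(apply_mouth_sensitivity(0.4, 25))  # 0.4 - unchanged at the assumed neutral
print(apply_mouth_sensitivity(0.4, 50))  # 0.8 - twice as dramatic
print(apply_mouth_sensitivity(0.4, 0))   # 0.0 - mouth stays shut
[/code]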