What that means, though, is that it should steadily improve over time as new models are trained. That is just my speculation, based on my own 30+ years of software development, and it still does not solve our noise problem today!

The good news is that Topaz has genuinely embraced machine learning and is actually putting it to use, rather than using the word 'AI' simply to sell products. We see the same thing at other companies such as Adobe, with their evolution of Lightroom. This is largely acknowledged in one or more Topaz blog posts. There is a business imperative to do this (sell more products), but there is also the reality that Topaz's vision is evolving and being re-factored, and along with it its development team and methodologies.

If you recall the history of Topaz products, they have offered stand-alone products in the past (like ReMask, Impression, Texture, etc.), then integration products like photoFXlab and Studio (which brought many of those previously stand-alone products together), and now they are back to stand-alone products. There is a reason the new AI products are stand-alone: they are completely different code bases from Studio, and they appear to share much of the same code for the user interface and so on.

That leaves two possible explanations for the difference in results. The first is that the AI Clear model was intentionally manipulated to produce an inferior result compared to the 'native' model in Denoise AI, in order to boost the native model's appeal. The second is that the AI Clear 'model' itself is in fact being used unchanged, but the Denoise AI code (algorithm) that executes it differs from the code that executes it within Studio. I suspect the second possibility is the more likely one.