Interpretability is a weak spot of machine learning and AI, and it makes applying them to economic policy-making problematic. What if we could change that? That is exactly what we are working on.

Hard-feeding assumptions into a model and measuring their impact? After extensive effort, this can now be done, but that is not all. We can actually do even better. Who says our ideas about the assumptions are sensible? Do we correctly account for bidirectional causality between assumptions and target variables? What if we are wrong?
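To make "hard-feeding an assumption and measuring its impact" concrete, here is a minimal toy sketch (not the actual model behind this work): a one-line inflation equation where an exchange-rate pass-through coefficient is imposed by hand, and the "impact" of the assumption is just the difference between scenario outputs. The baseline level, shock size, and coefficients are all illustrative assumptions.

```python
def inflation(devaluation_pct, pass_through):
    """Inflation response to a currency devaluation, given a
    hand-fed exchange-rate pass-through coefficient.
    All numbers are illustrative, not estimated."""
    baseline = 4.0  # assumed baseline inflation, %
    return baseline + pass_through * devaluation_pct

# Hard-fed assumption A: 10% pass-through
low = inflation(devaluation_pct=20.0, pass_through=0.10)   # -> 6.0

# Hard-fed assumption B: 25% pass-through
high = inflation(devaluation_pct=20.0, pass_through=0.25)  # -> 9.0

# "Measuring the impact" of the assumption = comparing scenario outputs
impact = high - low  # -> 3.0 percentage points
print(low, high, impact)
```

The point of the sketch is only the workflow: the assumption is an explicit input, so its effect on the target variable can be isolated by rerunning the scenario with a different value.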

Here we can ask the AI to figure out the assumptions for itself, without our help, and see the results.
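In the same toy setting, "figuring out the assumption for itself" can be sketched as estimating the pass-through coefficient from data instead of imposing it. Below is an ordinary-least-squares fit on synthetic, purely illustrative (devaluation, inflation) observations; the data points and the linear form are assumptions for this sketch, not results from the project described above.

```python
# Synthetic (devaluation %, inflation %) observations -- illustrative only
data = [(5.0, 5.1), (10.0, 6.0), (20.0, 8.1), (30.0, 9.9)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# OLS slope = estimated pass-through; intercept = implied baseline inflation
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
        sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

print(f"estimated pass-through = {slope:.2f}, baseline = {intercept:.2f}")
```

The contrast with the previous sketch is the design point: there the coefficient was a belief fed into the model, here it is a quantity the model recovers from data, so the two can be compared to check whether our beliefs were sensible.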

For example, what if the 2014 Russian crisis replayed tomorrow? What would it look like? The AI's answer is that it would probably involve a much smaller ruble devaluation, inflation in the 5-5.5% range, a milder recession, and a CBR key rate of 9.5-10%.

Stay tuned!
