Approximating an intractable posterior distribution by an approximating distribution of more convenient form is a central problem in Bayesian inference. A popular method for solving this problem is structured or fixed-form Variational Bayes, which works by numerically minimizing the Kullback-Leibler divergence from an approximating distribution in the exponential family to the intractable target distribution. An efficient way of performing this numerical optimization is to use a quasi-Newton method called natural gradient descent. We analyze the functional form of these natural gradients in a structured Variational Bayes setting and present some results that facilitate their calculation. We then show how these natural gradients can be stochastically approximated in very general settings, which allows us to automate the optimization procedure and to extend the structured Variational Bayes approach to problems where it was previously not applicable. Our proposed algorithm makes approximation of posterior densities as easy and user-friendly as gradient-based optimization: the user supplies as input a function that calculates an (unnormalized) log posterior density and/or its gradient for given parameters, and the algorithm outputs a structured variational approximation to the corresponding posterior distribution. The type of approximating distribution is chosen by the user and can be any distribution in the exponential family or any mixture of such distributions, which means that our approximations can in principle be made arbitrarily precise. Despite its generality, we find our approach to be highly competitive with Monte Carlo approximation in terms of both speed and accuracy.
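The interface described above can be illustrated with a minimal sketch. Note the simplifications: this toy version uses plain reparameterization (stochastic) gradients rather than the natural gradients discussed in the text, the approximating family is fixed to a one-dimensional Gaussian, and the target `log_p`/`grad_log_p` pair is a hypothetical stand-in for the user-supplied (unnormalized) log posterior and its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical user-supplied target: an unnormalized Gaussian log posterior
# with mean 2 and variance 0.5, playing the role of the intractable density.
def log_p(theta):
    return -0.5 * (theta - 2.0) ** 2 / 0.5

def grad_log_p(theta):
    return -(theta - 2.0) / 0.5

# Variational parameters of q(theta) = N(m, s^2), with s = exp(log_s).
m, log_s = 0.0, 0.0
lr, n_iter = 0.02, 6000
m_sum = s2_sum = 0.0
for t in range(n_iter):
    s = np.exp(log_s)
    eps = rng.standard_normal()
    theta = m + s * eps                # reparameterized draw from q
    g = grad_log_p(theta)              # only the user's gradient is needed here
    m += lr * g                        # stochastic ELBO gradient w.r.t. the mean
    log_s += lr * (g * eps * s + 1.0)  # w.r.t. log-scale; the +1 is the entropy term
    if t >= n_iter // 2:               # average the second half to damp the noise
        m_sum += m
        s2_sum += np.exp(2.0 * log_s)

m_avg = m_sum / (n_iter // 2)
s2_avg = s2_sum / (n_iter // 2)
print(m_avg, s2_avg)  # should approach the true posterior mean 2 and variance 0.5
```

Because the target here is itself Gaussian, the fitted approximation can match it exactly; for non-Gaussian targets the same loop finds the closest member of the chosen family in Kullback-Leibler divergence.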