Looking for ways to visualize machine learning processes. Neural nets are trained to recognize cancerous Ki-67-marked cells in biopsies. Instead of just counting the cells, a neural style transfer process is run to produce infinite zombies in their place.
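A rough sketch of the detect-then-restyle idea, assuming a trained cell detector and a pretrained style-transfer network. The names `detector`, `style_net`, and `restyle_detections` are hypothetical stand-ins, not the actual models used in the project:

```python
# Minimal sketch, assuming a detector that returns bounding boxes
# and a style-transfer net that maps an image patch to a stylized
# patch. All names here are placeholders for illustration only.
import torch

def restyle_detections(biopsy_image, detector, style_net):
    """Replace each detected Ki-67 cell patch with a stylized version."""
    output = biopsy_image.clone()                       # (1, C, H, W) tensor
    for (x0, y0, x1, y1) in detector(biopsy_image):     # one box per cell
        patch = biopsy_image[:, :, y0:y1, x0:x1]        # crop the cell region
        output[:, :, y0:y1, x0:x1] = style_net(patch)   # paint the zombie back in
    return output
```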
Why? Well, the hidden decision process in machine learning algorithms is problematic. If you are a doctor who has to decide whether a kidney stays or has to be removed, you need to be able to trust your data. ML results are sometimes very impressive and could automate the time-consuming manual cell counts, making diagnostics faster. In some cases, however, the results are simply wrong, which would mean a wrong diagnosis and treatment. So we need to open up the black box of the decision-making process of neural nets, instead of just showing the end results. One approach is heatmapping: running a backward pass through the network and then highlighting the areas in which the presence or absence of a signal was important for making the decision. The demos of this technique are impressive - draw a digit and it highlights exactly the areas that mattered for the decision.
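A minimal sketch of one common form of heatmapping, a gradient-based saliency map, assuming a trained PyTorch classifier `model` and an input tensor `image` of shape (1, C, H, W). Both names are placeholders rather than parts of the original project:

```python
# Gradient saliency: a backward pass measuring how strongly each
# input pixel influences the score for one class. `model` and
# `image` are assumed to exist; this is not the project's exact code.
import torch

def saliency_map(model, image, target_class):
    model.eval()
    image = image.clone().requires_grad_(True)  # track gradients w.r.t. pixels
    score = model(image)[0, target_class]       # logit for the class of interest
    score.backward()                            # the backward pass through the net
    # Max over color channels gives one importance value per pixel location.
    return image.grad.abs().max(dim=1).values.squeeze(0)

# heat = saliency_map(model, image, target_class=1)  # e.g. "Ki-67 positive"
```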
My plan is to open up the black box of decision making a bit and create a visualization of the decision process that is less an explanation than an interpretation. In the same way, we don’t see chemical formulas or numbers in a fire, yet with a bit of experience we can make statements about the quality of the wood, the humidity, and the wind conditions around it. A similarly unsignified visual representation could train the brains of doctors to interpret complex patterns and make correct judgments about the quality of the data.