In traditional programming, the programmer has to somehow decide on the rules the algorithm uses for decision-making. With machine learning, to grossly oversimplify the process, the programmer no longer defines those rules directly, but rather codes how an algorithm may learn those rules through observation.
For example, I created a product that learns to detect construction status (i.e. when building work has started, stalled, or completed) from aerial imagery using semantic segmentation and change detection. I never coded a definition of a construction site in an image; the model learned it by observing what I had flagged as construction sites (the training data).
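For the curious, the core of that "learning from flagged examples" idea looks roughly like the sketch below. This is not my actual pipeline; the model choice, tile sizes, and hyperparameters are placeholders, and random tensors stand in for the real aerial tiles and hand-flagged masks.

```python
# Minimal sketch: a segmentation model learning per-pixel labels from flagged examples.
# Everything here is illustrative, not the production system.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Two classes: background vs. "construction" -- labels I supplied, not rules I wrote.
model = fcn_resnet50(weights=None, num_classes=2)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-ins for a batch of RGB aerial tiles and their hand-flagged masks.
images = torch.rand(4, 3, 256, 256)           # 4 tiles, 3 channels, 256x256 px
masks = torch.randint(0, 2, (4, 256, 256))    # per-pixel labels (0 or 1)

model.train()
for epoch in range(10):
    optimiser.zero_grad()
    logits = model(images)["out"]             # per-pixel class scores
    loss = criterion(logits, masks)           # compare against what was flagged
    loss.backward()                           # the model adjusts its own weights
    optimiser.step()
```

Change detection then sits on top of this: you run the same tile through the trained model at two points in time and compare the per-pixel predictions.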
What scares people is that how the final model identifies a construction site is not easily explained; I can't simply say, "Oh, the model sees scaffolding." The model does learn the relative importance of repeatable features, but it assigns its own weighting to what matters as it learns, and the resulting feature maps often look completely wild. Because of this learning process, "showing" someone how it makes its decision is equivalent to showing them a massive decision tree with thousands of weights per decision, covering whether a given operation (such as simple edge detection) has produced a positive result at multiple scales and rotations for every pixel of an RGB matrix. That doesn't exactly sell well.
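To make that concrete, here is roughly what "just show us how it decides" amounts to. Again, this is a sketch using a stock backbone purely for illustration, not my actual model.

```python
# The "explanation" for a prediction is millions of learned weights and
# intermediate feature maps that are just grids of numbers.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)
n_params = sum(p.numel() for p in model.parameters())
print(f"learned weights: {n_params:,}")       # roughly 25 million numbers

# The closest thing to "what it saw" in one tile: first-layer activations.
tile = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    feature_map = model.conv1(tile)           # shape (1, 64, 128, 128)
print(feature_map.shape, feature_map.flatten()[:5])
```

None of those numbers individually means "scaffolding" or "crane"; the behaviour only emerges from all of them together, which is why the explanation doesn't fit on a slide.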
Where policies & frameworks need to sit is on the ethical use of AI in government and business, not necessarily AI's development from a university level which I agree should largely go unhindered. The reason is that while the above example IS superior to a human performing the same task (where it was 'wrong' largely just identified where the human equivalent had actually failed), it is also highly specialized and can be prone to bias or not adapt to changes over time if not maintained properly. This becomes problematic quickly in more subjective fields, for example, would you want an AI making a decision about your eligibility for healthcare without knowing what data sources the AI was trained on?
In theory, I agree with cyber. An incredibly well-developed and thought-out ML model trained on millions of data points could and should replace humans in many tasks, such as CAVs. The role of government is to ensure such models ARE well developed and thought out, and do not in any way favor one demographic over another in their decision-making. HP's face detection that couldn't identify people of color is a good example of where AI can go horribly wrong.
For this reason, it IS important (in my field of work) to have a decent definition, and right now there really isn't one. This is why I largely avoid the term and instead try to say exactly what we do when asked. We don't 'do AI'; we adopt machine learning techniques to automate or improve data generation and analysis workflows, which imo sounds more impressive anyway.