If you’re hoping for a self-driving car to chauffeur you around in the near future, you may have to reset your expectations. That is the view of numerous AI experts who argue that truly autonomous driving may be further off than many expect, or indeed hope, it to be.

Big businesses across the world have ploughed millions of pounds into developing and testing self-driving vehicles. Early results have been promising, and many firms have made bold predictions about when such vehicles will hit the mass market. Tesla’s Elon Musk claimed in 2015 that we’d have self-driving vehicles by now, as did Google around the same time. Others have said 2019 will be the breakthrough year, and have even begun planning for that eventuality.

However, AI analysts have urged caution, suggesting that autonomous vehicles aren’t yet able to sufficiently allay the safety concerns that would make them roadworthy.

The issue centres on the difference between two ways a machine-learning system can handle new inputs: ‘interpolation’ and ‘generalisation’.

Interpolation sees a system fill in the gaps between examples it has already seen: a computer shown images of moles and mice, and told that a vole lies somewhere between the two, can learn to identify one. Generalisation, on the other hand, means coping with situations that fall outside the training data altogether, and it is a far harder problem.

Generalisation is where AI technology in the automotive industry comes unstuck. Systems can be fed millions upon millions of training examples, but they’re still not advanced enough to display the generalisation needed to safely manage the chaos of real-life motoring, which routinely throws up situations no training set anticipated.
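The distinction can be sketched with a toy classifier. Everything here is illustrative: the body-length numbers and the nearest-neighbour rule are assumptions for the sake of the example, not anything carmakers actually use.

```python
# Toy 1-nearest-neighbour classifier on a single made-up "body length"
# feature. Training data covers only moles and mice.
TRAIN = [
    (14.0, "mole"), (15.0, "mole"), (16.0, "mole"),   # body length, cm
    (6.0, "mouse"), (7.0, "mouse"), (8.0, "mouse"),
]

def classify(length_cm: float) -> str:
    """Label an animal with the class of the nearest training example."""
    return min(TRAIN, key=lambda pair: abs(pair[0] - length_cm))[1]

# Interpolation: a vole-sized animal (~10 cm) lies *between* the two
# training clusters, so the rule gives a reasonable nearby answer.
print(classify(10.0))   # nearest cluster: "mouse"

# Generalisation failure: a 60 cm badger is far outside anything seen
# in training, yet the model still confidently picks a known label.
print(classify(60.0))   # "mole" — confidently wrong
```

The failure mode is the point: the model has no way to say “I’ve never seen anything like this”, which is roughly the worry analysts raise about rare, out-of-distribution events on real roads.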

Perhaps more worrying still is that motoring companies could be deploying technology that lacks the required safety fallbacks, simply because it produces the results they want.

Gary Marcus of New York University told The Verge: “They’re just using the techniques that they have in the hopes that it will work. They’re leaning on the big data because that’s the crutch that they have, but there’s no proof that ever gets you to the level of precision that we need.”