This report by Chartered Accountants Australia and New Zealand explores the link between AI and ESG, pointing to the challenges ahead for finance and accounting professionals, as well as for society at large.
The extremely ambitious goals defined by international agencies worldwide carry a high risk that some organizations will misrepresent their green credentials, whether by cherry-picking what to disclose or when to disclose it, or «via high profile public campaigns that implicitly or explicitly suggest certain actions, but where the subsequent fulfilment is absent».
AI can help to tackle this problem: for instance…
As reported by the MIT Technology Review, «by capturing 40 pieces of data per person — from iris scans and family links to their favorite fruit — a system meant to cut fraud in the Afghan security forces may actually aid the Taliban».
In order to mitigate the risk of scenarios such as this happening worldwide, facial recognition technologies must be regulated in the strictest possible way and with the broadest possible international consensus.
These Guidelines adopted by the Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data of the Council…
Governments around the world «are increasingly turning to algorithms to automate or support decision-making in public services». One could go even further and state that algorithms are “eating” the negotiation ground of any present-day social contract, however defined, and this trend will only accelerate in the foreseeable future.
Algorithms will soon be part of any policy implementation, or, for that matter, of any model of society. They will define the way people and organizations interact with each other and with non-human entities.
The adoption of algorithms on an ever larger scale is justified by the pursuit of…
Data is frequently likened to commodities or essential goods such as oil, yet it has its own dynamics, which deceptively simple metaphors might fail to capture. It has a peculiar way of generating value: «its power is relational and cumulative, realized in the aggregate». This is why the “privacy agenda”, whilst crucial as far as people’s fundamental rights are concerned, has not been and will not be enough to mitigate data inequality, defined as «the uneven sharing of gains from a resource whose value is created by many stakeholders».
Whilst there is no silver bullet to fix an issue which…
This is a broad yet crucial question to ask, as culture embodies the “semantics” (as Vygotsky would say) as well as the value system of any social structure, and thus it is the most powerful conveyor belt of experience, know-how, and ideas within the organization. In a way, culture «engender[s] shared sensemaking».
And when it comes to enterprises facing daunting transitions, it is pivotal for leaders to see culture up close, so as to avoid biases and pitfalls which might be «deeply woven into the organization’s social fabric».
This dashboard by MIT Sloan Management Review is a good primer on assessing, preparing for, and measuring the very cultural shifts the post-Covid era has made overdue.
In one of the first papers ever published on this topic, the OECD studies anti-competitive behavior in a decentralized digital setting, i.e. on distributed ledger technologies and blockchains.
From the standpoint of antitrust law, the concerns remain the same as in traditional markets. Competition policymakers should focus on the development of permissioned blockchains, which, rather unexpectedly, pose a much higher risk than fully permissionless ones.
Research has shown that users of digital systems already have a hard time telling whether they are interacting with AI-enabled tech. Even for experts, predicting where and how AI will be used is becoming increasingly difficult.
Hence, instead of working on a case-by-case basis, which is hard to scale and prone to error, it is high time we embraced an “AI by default” perspective, assuming that every system does or will incorporate some form of AI.
And this is especially true of AI bias: as the National Institute of Standards and Technology (NIST) suggests in Special Publication 1270…
As everybody familiar with the field is aware, a generally agreed-upon definition of AI does NOT exist.
The AI Regulation drafted by the European Commission defines AI only by attempting a scantily sketched taxonomy in Annex I (machine learning, statistical approaches, formal approaches, and that is all as of now) or through a teleological approach (what are the objectives of an AI system?).
However, while it is undeniably true that artificial intelligence in general has been a moving target since the Fifties, it is also true that some kind of all-encompassing definition is much needed, lest such a yawning…
Imagine you are pushing your trolley down the aisles of your favorite supermarket. You reach the checkout, whip out your EU-designed payment app and snap! The groceries are paid for at the touch of a button. No extra-EU payment network was involved in settling the transaction.
This scenario will certainly unfold in the near future: a few days ago, precisely on July 14th, the European Central Bank (ECB) inched closer to a preliminary plan for the legal and technical design, development and deployment of a digital euro. As ECB Board Member Fabio…
Privacy has come under close scrutiny in Italy as of late, especially as moves to revamp a couple of government digital platforms (namely, IO and Immuni) so as to implement the EU Digital COVID Certificate in the Italian cyberspace have drawn the attention of the Data Protection Authority. The latter decided to forbid the use of the IO app as it stood since, reportedly, said app was in violation of several GDPR articles.