Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference (AEST).

 
 
Session Overview
Session
Day 2 Session 1: Paper Presentations
Time:
Thursday, 03/Oct/2024:
9:30am - 11:00am

Session Chair: Milly Stilinovic
Location: R.D. Watt Building

Camperdown NSW 2050, Australia

Presentations

AI, verifiability and the epistemic commons

Heather Ford, Michael Davis

UTS, Centre for Media Transition, Australia

Repositories of open knowledge, such as Wikipedia, are increasingly used as data sources for commercial products like knowledge graphs, virtual assistants and generative AI (Ford & Iliadis, 2023). However, the demands of expediency are threatening critical elements of the open knowledge ecosystem in what McDowell and Vetter (2023) have recently called the ‘re-alienation of the commons’.

In this paper we focus on verifiability, a key element of the open knowledge ecosystem. On Wikipedia, for example, where it is a core content policy, verifiability underpins both the epistemic value of Wikipedia itself and of Wikipedia data that is extracted for use in other systems. All contestable claims on Wikipedia must be attributed to reliable sources, enabling a system of public verification. But with the rise of ‘knowledge as a service’ (Zia et al., 2019), verifiability is under threat – apparent in unreliable or non-existent attribution and in obstructed pathways to participation in the epistemic commons. Beyond this looms the threat of increased privatisation and sequestration of open knowledge into a landscape of digital walled gardens.

Through a framework grounded in social epistemology, this paper articulates the value of verifiability as both moral and epistemic, in the sense that impeding access to communal processes of verifiability could be considered an injustice, leading to the unequal distribution of epistemic value (van den Hoven & Rooksby, 2008). This situates our work among recent scholarship on epistemic rights (Flew, 2024; Nieminen, 2024) and duties to mitigate AI-generated ‘careless speech’ (Wachter et al., 2024). We also consider how AI companies’ power to transform the structure of the knowledge ecosystem (Lazar & Bullock, 2022) demands a regulatory focus on defining and managing the appropriate limits of that power.



Architecture of AI and provision of essential digital services

Miah Hammond-Errey

Sydney University, Australia

Policymaking for the Internet is complex and contested (Kettemann, 2020; Belli et al., 2023). This paper explores the impact of an emerging technology, artificial intelligence (AI), on internet policy.

The infrastructure underlying and essential for AI systems includes data, compute capacity, underlying connectivity and workforce (Hammond-Errey 2023). These are referred to collectively as the ‘architecture of AI’ (Hammond-Errey 2024).

Considering these aspects collectively as the architecture of AI enables a broader examination of the actors (companies and individuals) involved in AI and of the technologies and supply chains it relies on. It also encourages us to explore potential changes in AI and its underlying infrastructures and architectures, and their implications for internet policy.

This paper argues that the architecture and infrastructures of AI should be considered essential digital services. These digital infrastructures are as essential to society as energy or water, but they are not considered ‘utilities’, so the provision of their services is not standardised or regulated. While they are often considered in terms of critical infrastructure, the architecture of AI that underpins digital platforms is broader and in critical need of stable, reliable provision. Unlike traditional utilities such as telecommunications, water and energy, digital platforms have been created solely in the commercial space. Cases like the recent Google antitrust ruling and the CrowdStrike outage shine a light on the need to standardise the supply of digital services (Hammond-Errey 2024b).



Artificial Intelligence and the Creative Industries

Terry Flew, Jonathon Hutchinson, Wenjia Tang

The University of Sydney, Australia

The Hollywood writers’ strike of 2023 drew attention to the extent of the emerging conflicts between creative workers and major creative industries firms over how artificial intelligence (AI) may be used in current and future creative work. While earlier debates about the impacts of AI, machine learning and robotics on the future of work tended to differentiate between unskilled and semi-skilled work, seen as highly susceptible to automation, and creative work, seen as largely immune (Frey and Osborne, 2013), the rise of generative AI has drawn attention to the degree to which automated text, sounds and images may substitute directly for human labour.

This paper will examine some of the AI tools available for creative work, ranging from those with a long history of application, such as virtual reality and digital effects, to those which can be directly substituted for creative work, including content creation and information enhancement tools. It will consider different approaches to the relationship between people and technology in creative work, from those who stress the uniqueness of human capabilities (Anantrasirichai and Bull, 2022; Fjelland, 2020) to those who have long stressed how augmentative digital technologies fundamentally shape human creative capacities at any given time (Boden, 2014; Lee, 2022). It will also note that sitting under the general umbrella term "creative work" are both highly abstract and specialised activities and those of a more mundane nature, with the latter particularly susceptible to transformation through AI.



Harnessing and directing the power of data

Thomas William Barrett

United States Studies Centre at the University of Sydney, Australia

Data has become a core source of modern power. It underpins global trade, the management of power grids and technological innovation. Data's significance has accelerated with the rise to prominence of artificial intelligence and the ratcheting up of geopolitical tensions. How governments approach data will be instrumental to internet policy and will shape the very functioning of our future societies.

This paper maps out how different governments perceive data and considers the resulting implications for governance of the internet and the modern digital economy. It highlights different policy approaches to data inputs and sources; infrastructure for storage and analysis; and data end-uses. Drawing on a combination of key interviews, scholarly research and data analysis, it focuses on the United States, Australia, the European Union and the United Kingdom as central case studies.

Sources for data are expanding through new inputs (such as connected vehicles or consumer neurotechnology) and aggressive use of existing inputs (such as web scraping). In addition, the capabilities and infrastructure that collects, transports, stores and analyses data are increasingly concentrated amongst a small set of technology companies and data brokers, while government concerns around sovereignty and data security are simultaneously rising. Finally, new uses for data – from creating deepfakes to tracking CO2 emissions – continue to grow. Taken together, these changes place data squarely at the centre of internet policy debates, which are also being shaped by other dynamics in governments and the private sector.

Drawing on this taxonomy, this paper considers how this new framing of ‘data as power’ is impacting domestic and international policymaking across the globe, and what that means for internet policy. Failing to understand how governments consider data imperils any effort to bring together different governments and governance mechanisms to build cohesive internet policy that ensures a free and open internet, and global digital economy.



One Size Doesn’t Fit All: Understanding Social Media Platform Regulation in India

Tania Chatterjee1, Agam Gupta2, Pradip Thomas3

1University of Queensland-Indian Institute of Technology Delhi Academy of Research, Australia; 2Indian Institute of Technology, Delhi; 3University of Queensland

The perils of online communication have led to growing calls to regulate social media platforms. Even scholars who were ardent advocates of internet non-regulation (Balkin, 2020) are now writing about how the internet requires some regulation. While platform self-regulation has always existed, state regulation is taking centre stage. State legislation often falls short because traditional laws are not well suited to these new technologies (Ghosh, 2021). Set in the context of this regulatory challenge, this paper examines India’s Information Technology (Amendment) Act, 2008, which regulates social media platforms, and the intermediary responses to the liabilities it imposes.

Guided by two specific questions, i) how does the Indian state expect social media platforms to regulate content? and ii) how do platforms comply with the liabilities imposed?, we undertake three sets of document analyses running to 236 pages. First, an analysis of India’s Information Technology (IT) (Amendment) Act, 2008 and the IT Rules to understand the Indian state’s expectations. Second, an analysis of the monthly compliance reports of X (formerly Twitter), WhatsApp, and ShareChat, designated as significant social media platforms by the legislation. We deemed these compliance reports a reliable source as they correspond to the legal compliance requirements and allow us to spot the gaps between policy and practice. Delineating the data in the compliance reports into proactive and reactive monitoring, we argue that there seems to be no common understanding of the regulatory expectations among these platforms. Questioning such variations, a third set of document analyses, covering the platforms’ About Us pages and their content policies, along with scholarly works on the evolution of the platforms, allowed us to chart each platform’s technical, social, and economic characteristics. We find that a diverse understanding of the same regulatory expectation is embedded in the distinct socialities (van Dijck, 2013) offered by the platforms, which shape and are facilitated by their technical architectures. This further impinges upon their content policies and supports each platform’s business model (Postigo, 2014), making legislative compliance difficult.

Thus, we question the legislative efficacy of classifying all social media platforms into one aggregated category and framing common rules. We make a case about the legislative limitations of intermediary liabilities and also bring to the fore the limitations of platform self-regulation. Together, these limitations put healthy digital communication at stake.



The intersection of internet infrastructure and privacy

Ryan Payne

University of Canberra, Australia

The intersection of internet infrastructure, artificial intelligence (AI), and the burgeoning volume of data raises significant research concerns, particularly regarding privacy and data transfer misuse. AI systems are increasingly reliant on vast datasets, and when combined with the Internet of Things (IoT), they create an environment ripe for both innovation and risk. The three Vs of big data—volume, variety, and velocity—are particularly relevant here. The sheer amount of data (volume) used for training AI models, the diverse types of data (variety) that enable complex inferences, and the rapid pace of data processing and sharing (velocity) all contribute to potential privacy and infrastructure issues (Kerry, 2020).

Legally, the protection of privacy and data varies globally. Although privacy is a recognized human right under documents like the Universal Declaration of Human Rights (United Nations General Assembly, 1948) and the European Convention on Human Rights (Council of Europe, 1950), data protection laws, such as the European General Data Protection Regulation (GDPR), differ in scope and enforcement (Keane, 2021). These regulations aim to specify data collection practices, retention periods, and consent requirements (Zenonos, 2022), and they encourage emerging privacy-preserving machine learning and differential privacy techniques, such as introducing noise to data to obscure individual identities in case of data breaches (Wood et al., 2018).
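The noise-addition idea mentioned above can be sketched in a few lines. This is an illustrative example only, not drawn from the cited works; the function names (`laplace_noise`, `dp_count`) and the choice of a count query are assumptions made for the sketch.

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two independent
    # exponential samples with the same scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, epsilon: float = 1.0) -> float:
    # A count query changes by at most 1 when one person's record is
    # added or removed (sensitivity 1), so Laplace noise with scale
    # 1/epsilon yields an epsilon-differentially-private count.
    return len(values) + laplace_noise(1.0 / epsilon)
```

The released count is close to the true count on average, but any single individual's presence in the data is masked by the noise.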

However, the infrastructure of the internet itself, with the explosion of data caused by AI usage and the growing practice of attaching privacy transfer protocols to information flows, slows, clogs, or overloads the pipelines of the internet. As the internet becomes overloaded, net neutrality is put at risk, giving rise to further splintering from the commonly used world wide web. The mentality of adding more to protect privacy is thus having downstream effects that undermine its own aims. Therefore, privacy by design (PbD) (Cavoukian, 2020) and down coding, or reducing code to a minimally needed state so that less excess data is transferred, are discussed as ways to consider privacy policy.

References

Cavoukian A (2020) Understanding how to implement privacy by design, one step at a time. IEEE Consumer Electronics Magazine 9(2): 78–82.

Council of Europe (1950) European Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocols Nos. 11 and 14. Available at: https://www.refworld.org/docid/3ae6b3b04.html (accessed 2 August 2024).

Keane J (2021) From California to Brazil: GDPR has created recipe for the world. Available at: https://www.cnbc.com/2021/04/08/from-california-to-brazil-gdpr-has-created-recipe-for-the-world.html (accessed 2 August 2024).

Kerry C (2020) Protecting privacy in an AI-driven world. Available at: https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/ (accessed 2 August 2024).

United Nations General Assembly (1948) Universal Declaration of Human Rights. United Nations. Available at: https://www.un.org/en/about-us/universal-declaration-of-human-rights (accessed 2 August 2024).

Wood A, Altman M, Bembenek A, et al. (2018) Differential Privacy: A Primer for a Non-Technical Audience. Vanderbilt Journal of Entertainment & Technology Law. Available at: http://privacytools.seas.harvard.edu (accessed 2 August 2024).

Zenonos A (2022) Artificial Intelligence and Data Protection. Available at: https://towardsdatascience.com/artificial-intelligence-and-data-protection-62b333180a27 (accessed 2 August 2024).



(Little) Appetite for Disruption: The view of news media subsidies towards the turn to online news

Timothy Koskie

University of Sydney, Australia

Cries of news media in crisis have been such a mainstay of public discussion that they have come to characterise much of the history of journalism, but the familiarity of this condition can mask underlying conditions that are indeed shifting substantially, with challenges for consumers, governments, and the media organisations themselves. With the commercial model for funding journalism facing increasing struggles, there has been a recent rise in governments adopting subsidies to support both consumers and media producers as the news increasingly moves to online spaces. Frequently labelled ‘innovation’ subsidies and funds, these approaches are diverse but share some core characteristics.

In this research, I conducted policy analysis of direct and indirect subsidies arising in Australia, the UK, Canada, NZ, Norway, Belgium, and South Korea to identify those policies with an explicit focus on innovation and digital transition. An examination of policy documents and related material finds that adaptation to, and adoption of, digital media production and distribution are framed as inevitable, but interventions to support these changes tend to be time-limited and narrow in scope, and frequently focus on existing legacy media organisations. This study seeks to identify where the policy objectives are disconnected from the execution of the subsidies, what that means for the potential impact of the subsidies, and how subsidies might be shaped for better results in the future.



 
Conference: Policy & Internet Conference