A ‘how-to guide’ to legally and ethically leveraging AI in your medical practice
One of the most transformative developments in recent years is the emergence of artificial intelligence (AI), and the impact it can have (and is already having) on the medical field is profound. AI tools such as Heidi and its competitors offer substantial benefits, from optimising operational efficiency to enhancing diagnostic accuracy.
However, using AI also comes with risks and certain legal and ethical considerations that you may not have thought about yet and that require careful navigation. In this article we explore why medical practices are beginning to implement AI, and in what ways. We then consider the risks this can present for your practice and, ultimately, your patients, and how to navigate these challenges.
How can I use AI in my practice?
The opportunities are seemingly endless! While AI can be used for diagnosing and analysing scans and samples, one of the most common ways we are seeing medical practices use AI is to streamline their routine administrative and clinical tasks. This can have various benefits, including allowing doctors to focus more on patient care and reducing costs.
One of the most popular platforms among our medical practice clients is Heidi. If you are not familiar, Heidi is an AI-powered platform designed to streamline various aspects of a medical practice, from administrative tasks to clinical decision-making support. By leveraging machine learning algorithms, Heidi can assist with scribing and creating referral letters. The goal is to enhance efficiency, reduce errors, and ultimately improve patient outcomes.
While Heidi may currently be one of the most prominent platforms, several competitors offer similar AI-driven capabilities with varying focuses (e.g. some platforms focus on administrative support while others focus on diagnostic support). The scope for AI to assist is vast, but unfortunately so are the potential problems and unwanted outcomes. Using these tools also necessitates adherence to best practices to mitigate risks and ensure compliance with legal, regulatory and ethical standards specific to Australia.
How can I safely use AI in my practice?
Whether you are already using AI or are just thinking about it, here is a list of the necessary considerations and best practices to help guide successful and legally compliant use of AI in your medical or health practice.
Legal Considerations
Privacy
The use of AI in healthcare raises various concerns regarding data privacy and security. As you would likely already know, in Australia, personal information is protected under the Privacy Act 1988 (Cth), as well as under some state and territory legislation. There are also increased protection obligations for ‘sensitive information’, which includes ‘health information’ and is therefore particularly relevant for healthcare providers to consider when using AI in their practice. The more data you input into AI, the greater the risk that privacy issues may arise.
A medical practice’s privacy policy should be updated to contemplate the use of AI. This helps patients and other users understand how AI and their data will be used in their care, and it helps protect the practice and comply with its Privacy Act obligations.
It is also important to consider the AI platform’s own policies. For example, Heidi is considered compliant with, and committed to, the Australian Privacy Principles (APPs). However, if you are unsure, it is worth investigating whether the Privacy Policy and Terms of Service of the AI platform are ‘up to scratch’ in meeting the necessary standards and will not expose your practice to risk. This will also help you dot the i’s and cross the t’s when it comes to using the AI platform only as it is intended to be used, e.g. some platforms may require you to maintain complete and accurate records and documents regarding your use of the platform.
Consent
It is also important to consider whether you are obtaining patient consent before inputting a patient’s personal and sensitive information into an AI platform. This may be mandatory in your state, but in any case it is best practice. It should form part of your Terms of Service and the standard informed consent process, and should be documented in the patient’s record. You may also wish to consider updating your Patient Consent Form or New Patient Registration forms to include this information.
Transparency is always important, and patients will want to be informed about the use of AI in their care. Depending on the AI software used, this may include explaining what it does, how it will influence their treatment, and/or any potential risks or benefits. This would be particularly important when it is used in diagnostic or treatment decisions. AI should support, not undermine, the patient's right to make informed decisions about their care.
Employment Contracts, Services Agreements & Policies
You may also wish to include an AI clause in Employment Contracts or Services Agreements if the employee or contractor will be using AI as part of their work. Particularly for employees, a clause like this could specify the roles and responsibilities of the employee concerning AI tools. It could also clarify liability in the event of errors or issues arising from the use of AI, which could help protect both the practitioner and the practice. However, it would be prudent to draft any clause in a way that keeps it applicable and relevant as AI and technology advance.
Having an organisational policy in place to regulate the use of AI in your clinic or practice can ensure that AI is used in accordance with your practice’s values, and it allows more flexibility, as a policy can be updated more easily than a contract as the technology advances.
Regulatory Compliance
The key takeaway regarding regulatory compliance is to keep abreast of evolving regulations and guidelines related to AI in healthcare. This may include signing up for newsletters (for a start, the You Legal newsletter if you are not already subscribed!) or regularly checking for updates from regulatory bodies and adapting your practices accordingly. For example, the Australian Digital Health Agency (ADHA) has various priorities and initiatives underway as part of its plan for emerging technologies such as AI. It is important to be aware of any updates from the ADHA and other similar agencies that may affect your practice.
Establishing connections and relationships with legal experts can also help you navigate complex regulatory and legal landscapes, and they can assist with reviewing and updating your policies as needed.
Ethical Considerations
It is important to remember that health practitioners will still have their existing obligations and ethical duties regarding patient care.
Practitioners do not want to expose themselves to potential claims of negligence by accepting AI’s results (e.g. diagnostic decisions) without confirming and independently assessing them. Similarly, practitioners should uphold core ethical principles such as promoting patient autonomy and ensuring equitable access to AI technological advancements.
Moreover, medical practices and practitioners should seek to know and apply the ‘Australian AI Ethics Principles’. While these are voluntary principles, they are very important given that AI systems used in healthcare have the potential to significantly impact patient outcomes in certain circumstances. These principles emphasise transparency, accountability, fairness, privacy protection and security, and individual wellbeing. Following these principles is likely to lead to benefits including increased trust and loyalty, and improved outcomes for patients.
Adopting AI technologies can revolutionise your medical practice by enhancing efficiency, accuracy, and patient care. However, it is crucial to implement these tools thoughtfully, after considering the potential legal and ethical implications. By adhering to best practices, you can harness the power of AI while safeguarding patient trust and maintaining compliance with regulatory requirements.
AI is not a lawyer
As the use of AI expands in health, medical practices are predicted to turn to AI to create their agreements, policies or other documents. This can have serious consequences for practices and, although it might seem like a convenient alternative to engaging a lawyer, will likely cost practices more in the long run than working with a lawyer in the first instance. It is vital that these documents be tailored to your practice, your industry and the jurisdiction that you operate in, to ensure that they are effective and protect your business.
We have previously covered why you shouldn’t do your own lawyering (which extends to why you shouldn’t use AI as a lawyer) in our article here. In mid-2023, a lawyer in the United States used ChatGPT to prepare a Federal Court filing that cited six cases that did not exist. This incident highlights a critical flaw in AI technology: its tendency to hallucinate, or produce incorrect information. While we encourage you to embrace innovation and these new tools, we strongly recommend that you don’t lose sight of the value of, and the importance of, engaging professional advice when you need it.
How can You Legal help?
At You Legal, we specialise in the legal intricacies of healthcare in Australia and understand the challenges and opportunities that come with integrating AI safely into your practice. Our team has extensive experience in providing advice and insights into best practices and we are here to support you in navigating these complexities, ensuring that your practice not only thrives with AI but does so responsibly and legally.
For example, we can help you update or draft your policies, terms of service, employment contracts or services agreements. For tailored advice and legal support in integrating AI into your practice, contact our team here. Together, we can pave the way for a future where technology and healthcare go hand in hand to deliver exceptional patient care.