Beware… AI is not a tool.
This article was written by our design director Paul Bulkeley… with a little help from Google Gemini. It explores the implications of Artificial Intelligence use for professional practice.
Artificial Intelligence is not a tool. A tool, no matter how sophisticated, is merely an extension of the user. It is not autonomous in any way. Its outcomes are always predictable. Whatever the tool outputs, the user remains responsible. Its actions cannot be separated from your actions, no matter how highly geared the outcome may be relative to your input. CAD, for example, does what you tell it to do. Nothing more and nothing less. Even when it creates a schedule from a 3D model, the outcome is rational, linear and algorithmic. Rubbish in, rubbish out… not so with AI.
AI is not like the tools that drove all previous revolutions. It takes your instruction and acts independently, creatively even. There is, therefore, a real separation between you and the outcome the AI produces. On this basis it is a third party: something that sits between a tool and a sub-consultant or assistant. This distinction means that, as providers of professional services, those who use AI are engaging with something new. Potentially something unprofessional.
When we use AI we are taking a risk… one that is hard to quantify. Inevitably we use it to save time, but just as often we use it for things we don't fully understand, tapping into its greater knowledge base. This makes editing its work tricky. We don't know what we don't know, so we can't easily spot its mistakes. If we mistakenly think of it as a tool, we assume its outputs are accurate. They are not, or at least not consistently. This requires users to apply a level of due diligence that in many cases runs contrary to the very reason they decided to use AI in the first place. Taking the time to fully check its work is unlikely to happen when the purpose of using it was to save time and access knowledge we don't already have at our fingertips. This exposes professional users of AI to potentially significant risks.
A recent report prepared by Deloitte for the Australian Government illustrates the danger. The report, commissioned by the Department of Employment and Workplace Relations (DEWR) for $440,000, was found to contain significant errors, including fabricated academic references and a false quote attributed to a Federal Court judge, and it was revealed that it had been created, at least in part, by AI. The episode exposed the company to significant commercial embarrassment and claims, with Deloitte pre-emptively refunding a proportion of its fee, and that will probably not be the end of the consequences. The root of the problem was identified as AI's propensity for 'hallucinations'. It is an assistant, but an assistant with a propensity to lie. Nothing is more unprofessional.
The outcomes AI produces will always be plausible but can be fabricated and therefore fraudulent. AI is more like an insecure colleague who wants to please you, even when the ethical and professional thing to do is to acknowledge professional ignorance. This propensity to make things up rather than admit to ignorance is fundamentally unprofessional. It goes against ARB Code of Conduct 1.1 and 1.2, which state that 'Architects must be honest and act with integrity and will meet the standard when they: use impartial judgement based on the evidence available' and 'promote their services responsibly and accurately.' As a minimum, architects must inform their clients that they use AI in the course of their work and should be able to refer clients to a company policy that sets out the way in which AI is integrated into their workflow. If the outputs of our work are found to be full of errors, it is doubtful that such a policy will carry much weight.
The use of AI also exposes professionals to the risk of contravening Standard 3 of the code, which requires architects to 'work within the limits of their competence' and to assure themselves of the competence of anyone they engage to carry out work on their behalf. Does this include outsourcing to AI, and if so, how does one meaningfully determine the competence of an AI? The requirement is heightened further where the architect acts as Principal Designer, responsible for assessing the competence of the design team.
Conflicts with the ARB Code of Conduct go further still. ARB Code 2.1 and 2.3 place a duty on architects to act in a way that 'supports the environment'. The most transparent data to date, provided by Google on its Gemini AI, suggests: 'In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity, the equivalent of running a standard microwave for about one second.' This is clearly not eye-watering, and certainly not enough for most of us to think twice about the ethical implications of AI use. The collective and cumulative impact is, however, material. According to Google, AI 'generating an image can use the same energy as charging your phone', and the process of satisfactory AI image generation can, in our experience, require significant numbers of iterations. A single large or AI-focused data centre can reportedly consume 50 MW or more, with some of the world's largest requiring more than 100 MW. To put this into context, Hinkley Point C will generate 3,200 MW, which could power around 6 million homes or as few as 32 to 64 data centres, depending on their size. That means each data centre consumes as much power as between 93,750 and 187,500 homes: roughly the number of homes built in the UK in a typical year. The energy consumed by an individual AI search may seem largely irrelevant; our wholesale adoption of the technology is hugely significant. With limited energy supplies and even more limited grid connections available, we may have to choose between AI and homes. These are real ethical dilemmas facing society.
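For transparency, the working behind those figures is simple division, a back-of-envelope sketch that takes the quoted capacities at face value:

3,200 MW ÷ 100 MW = 32 data centres, and 3,200 MW ÷ 50 MW = 64 data centres.
6,000,000 homes ÷ 64 = 93,750 homes per data centre, and 6,000,000 homes ÷ 32 = 187,500 homes per data centre.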
In addition to the potential breaches already outlined, Standard 4.5 requires architects 'to maintain accurate records of their work and decisions', which may be hard when the thinking is done by an external AI. Standard 4.6 goes further and requires architects to 'protect the confidentiality and security of the information for which they are responsible', which cannot be fully upheld if you are sharing project documentation with an AI in order to brief and instruct it on the data relevant to your specific task. Beware: your client's confidential information may soon appear in someone else's AI searches. AI forgets nothing and seems to have no boundaries on what it will take and recycle. When someone asks an AI about new developments in their area and your client's still-confidential proposals come up, your client may well look to place blame for the leak!
On compliance with the Codes of Conduct alone, the use of AI looks challenging at best and potentially represents a breach of professional standards.
What about compliance with professional services contracts? The RIBA Professional Services Contracts require architects to obtain client consent before appointing sub-consultants to perform services. The contracts also stipulate that sub-consultants must be aware of, and must not cause the architect to breach, the terms of the main contract with the client. This suggests that an architect would need to inform their client of their intention to use AI, as this is in effect outsourcing to a personal assistant on matters of substance, and arguably sub-contracting. No doubt the courts will soon rule on whether this is the case. Standard agreements also include obligations, such as keeping project data secure and complying with data protection laws, that may also be breached through engagement with AI, as outlined above. This reinforces the need for an AI policy that communicates your company's approach to AI use and any boundaries on its application.
Another key question is whether the use of AI complies with the requirements of your Professional Indemnity Insurance. Shanna Renaud from Lockton outlines the risks and reinforces the need for a robust AI policy. She writes:
‘Without the appropriate governance policies, risk mitigation, and qualified personnel in place, these issues can have severe consequence. Errors may lead to project delays and introduce additional costs – especially if not discovered until later in the construction process. For the professional involved, it can lead to financial and reputational harm. If they are found to have been negligent in their duties (by relying upon AI tools that produced negligent design), they could also be subject to claims against their PII…It is highly recommended that firms who are incorporating AI into their operations draw up a clear usage policy – to set out protocols, ensure human oversight, and establish accountability should something go wrong. It should be regularly reviewed and updated as new risks emerge. That policy should be understood as one component within a broader framework adopted across the business – embedding governance, accountability, and oversight at every level…Firms must also recognise that AI misuse may not only expose them to professional negligence claims. They could find themselves in breach of wider laws and regulations such as the EU AI Act, data protection and privacy regimes, or sector‑specific compliance requirements.’
This is both wise and sobering advice and suggests that a measure of caution and professional judgement is essential to responsible AI use in practice.
Finally, as with any software we use, it is important to understand the limitations imposed by its terms and conditions. Can, for example, an employee's personal AI account be used for commercial purposes? When a colleague asked GPT-5 whether it could be used for commercial purposes, it responded as follows:
Yes — you can use me (GPT-5) for commercial purposes ✅
That means you can legally use my outputs in work you sell, publish, or deliver to clients — including:
• architectural reports, drawings, and specifications
• business plans, proposals, or emails
• marketing materials, website text, or social media content
• creative or design work (graphics, copywriting, etc.)
However, there are a few important conditions to keep in mind:
1. You own the outputs, unless they include content that’s third-party copyrighted (e.g., if I quote a book or use brand logos).
2. You’re responsible for use — meaning you should review all content for accuracy, compliance, and suitability before publishing or submitting it.
3. Don’t imply OpenAI endorsement — e.g., don’t market a service as “powered by ChatGPT” or “certified by OpenAI” unless you have explicit permission or API terms allowing that.
4. If you use images or designs I generate, you can use them commercially too, as long as they don’t depict real identifiable people or infringe existing brands or trademarks.
In principle this sounds positive. It may of course be that GPT-5 just wanted to please my colleague, a regular user of its services!
Even if AI struggles to act with humility and has a tendency to tell us what we want to hear, its human creators are clearly keen to ensure we understand its limitations. ChatGPT states beneath every response: 'ChatGPT can make mistakes. Check important info.' This is wise advice…
At the top of this article, you will have seen an AI-generated image. When asked to 'create an image of AI helping an architect design a building', ChatGPT created an image of a robotic assistant pointing a young female designer, who is amusingly still drawing by hand, towards a better way… its own way. This raises a key consideration: whose design is it, ours or the AI's? At Snug Architects this is not a distinction that concerns us; we are a team and therefore collective authors of our work. As the image depicts, AI is just another member of the team: a very enthusiastic and productive, but still very junior, member of the team. Our relationship with AI will no doubt evolve, but for now we remain both responsible and in control. We are the lead designer, or so we like to think.
We asked AI to update the image at the top of this article and 'change the image so that the person is instructing and the robot is doing the drawing work.' It paused for a while, and this is what it created… it would not, could not, did not change a thing!
At Snug Architects we have sought to engage proactively and ethically with the opportunities provided by Artificial Intelligence. This has involved workshops and discussion with our team, and an open-source attitude to the discoveries our team make in their own explorations of AI. It has been important to take what many were doing in isolation, within their personal workflows, and make it more accountable, not least to senior management, allowing collective discernment of whether it is helpful to the wider team, ultimately productive and, above all, ethical. What is emerging is a company protocol on the use of AI. One created by us… with a little help from AI!