
At Certchain, we are deeply committed to harnessing the transformative power of artificial intelligence (AI) to solve complex challenges in the built environment. From regulatory compliance to productivity optimisation and waste reduction, our AI models are designed to drive significant value for our clients.
As the CEO of Certchain, I am often asked about the potential risks associated with AI technology. Conversations around AI safety frequently veer toward existential threats and the potential collateral damage AI could cause to humanity.
While these concerns are valid, they are often driven by fear rather than grounded reasoning. I firmly believe that with the right practices in place, AI can be an immensely positive force, enhancing human capabilities rather than endangering them.
Mitigating AI Risks through Rational Design
My philosophy on AI safety aligns most closely with that of Dr. Steven Pinker. He has argued that treating AI as a potential existential threat to humanity is often exaggerated, and that clear goal-setting and safety checks can go a long way towards mitigating unwanted outcomes.
With the types of models we are pioneering, I am acutely aware of these concerns, particularly when using reinforcement learning (RL) to explore innovative solutions for the industry. The Certchain approach is grounded in the belief that with the right safety protocols and human oversight, these risks can be effectively managed.
Our Four Main Principles for AI Safety
So, what do we do? We live by four main principles to enforce AI safety:
1. Clear Goal Setting and Alignment
We ensure that our AI models, particularly RL agents tasked with exploring industry innovations, are guided by clearly defined objectives that are aligned with the broader goals of sustainability, efficiency, and safety. This reduces the likelihood of AI systems pursuing harmful or unintended actions.
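To make the idea concrete, here is a minimal sketch of how an RL objective can encode aligned goals explicitly. The function name, weights, and inputs are illustrative assumptions, not Certchain's actual reward design: the point is that productivity gains are traded off against waste and that safety violations are penalised heavily enough that the agent cannot profit from unsafe shortcuts.

```python
# Illustrative sketch (not Certchain's actual system): a reward function
# that encodes aligned objectives explicitly, so the agent cannot gain
# by pursuing wasteful or unsafe behaviour.

def aligned_reward(productivity: float, waste: float, safety_violation: bool) -> float:
    """Combine task progress with sustainability and safety terms."""
    WASTE_PENALTY = 0.5       # weight on material waste (assumed value)
    VIOLATION_PENALTY = 10.0  # large penalty for any safety breach (assumed value)
    reward = productivity - WASTE_PENALTY * waste
    if safety_violation:
        reward -= VIOLATION_PENALTY
    return reward

# A compliant, efficient action scores higher than an unsafe shortcut,
# even when the shortcut is nominally more productive.
print(aligned_reward(productivity=1.0, waste=0.2, safety_violation=False))  # 0.9
print(aligned_reward(productivity=1.5, waste=0.2, safety_violation=True))
```

The design choice worth noting is that the safety term dominates: no plausible productivity gain can outweigh a violation, which is one simple way of expressing "aligned objectives" in the reward itself.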
2. Bias Control and Fairness
We continuously monitor and address potential biases in our AI models by using diverse datasets and implementing fairness checks throughout the development process. This ensures that our AI systems provide equitable and non-discriminatory outcomes.
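One common form such a fairness check can take is a demographic-parity comparison: measure the rate of positive outcomes per group and flag the model when the gap between groups exceeds a tolerance. The sketch below is a generic illustration of that idea; the function names and the 10% threshold are assumptions, not Certchain's actual pipeline.

```python
# Hypothetical sketch of a fairness check: compare positive-outcome
# rates across groups and flag the model if the gap is too large.

def parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [sum(group) / len(group) for group in outcomes.values()]
    return max(rates) - min(rates)

def passes_fairness_check(outcomes: dict[str, list[int]], tolerance: float = 0.1) -> bool:
    """Return True if outcome rates are within tolerance of one another."""
    return parity_gap(outcomes) <= tolerance

outcomes = {
    "group_a": [1, 1, 0, 1],  # 75% positive outcomes
    "group_b": [1, 0, 1, 1],  # 75% positive outcomes
}
print(passes_fairness_check(outcomes))  # True
```

Checks like this are cheap to run throughout development, which is what makes "fairness checks throughout the development process" practical rather than a one-off audit.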
3. Human Oversight
We believe that human oversight is critical in maintaining control over AI systems. At Certchain, our customers must approve the actions proposed by our AI models, ensuring that human judgment remains central to decision-making processes.
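The approval requirement described above can be sketched as a simple human-in-the-loop gate: an AI-proposed action cannot be executed until a person has explicitly signed it off. The class and method names here are illustrative, not Certchain's API.

```python
# A minimal human-in-the-loop gate (illustrative names): no AI-proposed
# action runs until a human has explicitly approved it.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    approved: bool = False

class ApprovalGate:
    def __init__(self) -> None:
        self.pending: list[ProposedAction] = []

    def propose(self, action: ProposedAction) -> None:
        """The AI model queues an action for human review."""
        self.pending.append(action)

    def approve(self, action: ProposedAction) -> None:
        """Records the customer's explicit sign-off."""
        action.approved = True

    def execute(self, action: ProposedAction) -> str:
        """Refuses to act without a recorded human decision."""
        if not action.approved:
            raise PermissionError("Human approval required before execution")
        return f"Executed: {action.description}"

gate = ApprovalGate()
action = ProposedAction("Reorder structural steel to cut waste")
gate.propose(action)
gate.approve(action)   # the customer signs off
print(gate.execute(action))
```

The key property is that execution without approval fails loudly rather than silently, keeping human judgment structurally in the loop rather than merely advisory.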
4. Rigorous Safety and Quality Checks
We have established stringent safety and control practices that include rigorous testing, validation, and continuous monitoring of our AI systems. This helps us identify and mitigate any potential risks before they can impact our customers or the broader industry.
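Continuous monitoring of this kind often reduces to a simple comparison: the accuracy a model was validated at versus its rolling accuracy in production, with a degradation threshold that triggers human review. The sketch below illustrates that pattern; the threshold and names are assumptions for illustration only.

```python
# Toy sketch of a continuous-monitoring check (threshold is an assumed
# value): flag a deployed model for review when its live accuracy drops
# materially below the accuracy it was validated at.

def needs_review(validated_accuracy: float, live_accuracy: float,
                 max_drop: float = 0.05) -> bool:
    """Return True if live performance has degraded beyond tolerance."""
    return (validated_accuracy - live_accuracy) > max_drop

print(needs_review(0.92, 0.91))  # False: within tolerance
print(needs_review(0.92, 0.80))  # True: degradation, escalate to a human
```

The point of a check this simple is that it runs on every monitoring cycle, so a drifting model is caught and escalated before it can affect customers.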
Balancing Innovation and Responsibility
Whilst I acknowledge the potential risks associated with AI, I also recognise that innovation should not be stifled by fear. At Certchain, I have pushed to instil a culture that constantly balances safety with the need for progress, ensuring that our AI systems are not only safe and compliant but also capable of driving the radical changes needed to advance the built environment.
By adhering to rational design principles and maintaining a strong focus on safety, I believe that AI can be harnessed to create a better, more efficient, and more sustainable future for the industry. In doing so, I am confident that we are not only addressing current challenges but also paving the way for a future where AI serves as a force for good.