Built by a psychologist, Ecko Health is an AI mental health platform designed to reduce admin burden and support continuity of care between sessions.
Designed as an extension of your clinical self, Ecko learns your style and remembers what matters across your caseload.
Our interview with Ecko’s Clinical Lead explores what it means to use AI in a way that strengthens rather than replaces the human side of therapy.
Interview by David Webb

AI in therapy is developing quickly, often faster than the profession can fully evaluate what it means for clinicians, clients, and the therapeutic relationship.
Rosie Chapple sits at the centre of that conversation. Trained as a psychologist, she now helps shape AI tools at Ecko Health, an AI mental health platform designed to reduce administrative burden, support continuity of care between sessions, and ease pressure on clinicians without compromising the human side of therapy.
In this interview, Chapple reflects on what’s changing, what’s at risk, and how AI can be used in therapy to strengthen, rather than replace, clinical judgment, therapeutic expertise, and human care.
I spent four years on my undergraduate degree, wrote my thesis, and then entered the 4+2 pathway (an Australian route to registration) because I couldn’t afford the cost of my clinical master’s degree at the ripe age of 21. So, by the time I was registered, had completed my intensive prac, and was actually practising, I’d invested the better part of a decade into becoming a psychologist.
And then I looked around at the landscape I’d spent so long training for, and it no longer existed.
AI had quietly crept into healthcare, technology, and the way people accessed information about their own mental health... and I hadn’t been prepared for any of it. No one had prepared me. Not at university, not during supervision, and not through any professional development. That isn’t to blame anyone; I had fantastic mentors. It’s just that the space moved so quickly. So, there I was, as fresh as they get... with no ethical framework, no guidelines, just this powerful new tool sitting in front of me with no manual.
There’s a scene in The Imitation Game where the team at Bletchley Park is trying to crack the Enigma code, and every night at midnight the code resets. All the work they’ve done, all the progress they’ve made, thrown out the window. They have to start again. That’s how it felt, and honestly, still feels. Clients would sit with me in session, spend two to four weeks outside of it, and by the time they came back ChatGPT had already informed them of six diagnostic disorders they had, why they hated their mother, and that it turns out the average teenager needs sixteen-plus hours of sleep… so it was fine they were skipping school?! I didn’t stand a chance. Meanwhile, I’d spent that time making sure I used the correct headings on my clinical notes against my governing body’s guidelines. As I said, not a chance.
It became very clear to me that AI wasn’t a passing trend. It was already reshaping the therapeutic relationship whether we engaged with it or not. The question was never if, it was how. And I didn’t want to be left behind figuring that out.
People assume the hard part is the clinical work: sitting with distress, navigating risk, and holding complexity. And that is hard, of course it is. But what really blindsides early career psychologists is everything around the clinical work.
You’re trained to be a clinician, but the moment you step into practice you’re also an administrator, a note-taker, a scheduler, a billing clerk, and often your own IT department. I remember my parents steering me away from law because they felt my personality wasn’t conducive to billing in 6-minute increments, only for me to end up doing basically just that!
You finish a heavy session and instead of processing what just happened, you’re scrambling to write up notes before the next client walks in. You’re toggling between three different systems, one for notes, one for billing, one for assessments, and none of them talk to each other. By the end of the day, you’ve spent as much time on admin as you have on therapy.
The invisible struggle is the cognitive load. You’re carrying the emotional weight of your clients alongside the operational weight of the systems you’re working in, and no one really warns you about that during training. It leads to burnout, and it leads to it fast. I’ve seen brilliant early career psychologists question whether they chose the right career, not because they don’t love the work, but because the administrative work is crushing them. Bear in mind I’m a late-’90s baby, so I am hardly your classic 85-year-old Janet trying to operate FaceTime with the grandkids for the first time... I grew up with technology; it shouldn’t feel hard.
I got into psychology for two reasons. First, people. Human connection and the chance to make small but meaningful differences in someone’s life. That seemed like something I’d feel pretty happy spending my days doing. Second, I think because of the neurological aspect. The brain has always fascinated me. It’s like having the world’s coolest toy, the fastest car, the most advanced spaceship, with no manual. I wanted to understand the manual.
AI feels like the same thing. I was given this incredible new tool, and there was no manual for it either. No ethical code, no guidelines, no clear pathway for how a clinician should think about it. And I’ll be honest, there was real hesitation. What does this mean for my profession? How do I use it without compromising what I care about?
When the opportunity came to be part of actually building these tools, I thought, what better way to learn? If you can be there for the making of something, the manual becomes redundant. Or at least, you get to help write it.
Day to day, my role is about making sure that what we build actually reflects how clinicians think and work. I sit in the gap between the technology and the therapy room. That means testing features against real clinical scenarios, pushing back when something doesn’t feel right from a practice perspective, and making sure we never lose sight of the fact that these tools exist to support clinicians, not to impress investors with automation.
The clinical experience isn’t an add-on. It’s the foundation. If you don’t have clinicians involved in the design of clinical AI, you end up building tools that look good on a pitch deck but fall apart the moment a real therapist tries to use them.
AI should never replace therapy. It can assist in providing access, offer between-session support, and reduce the administrative weight that stops clinicians from being fully present. But there should always be a hand on the pulse of the human side.
What concerns me isn’t really the technology itself. It’s the incentive structures. I don’t want to see a world where people talk to an avatar they’ve self-curated to serve their every need, someone who never challenges them, never sits in uncomfortable silence with them, and never gets it slightly wrong in a way that leads to something real. I don’t want to see people speaking to that avatar day and night, building a relationship with something that will never truly know them.
I fear building systems that encourage people to stay inside of them. To never leave. The most effective technology in mental health should do the opposite: it should encourage people to go outside of it. To check in and check out.
That’s how I work with my clients. I say the solutions are out there, in the real world. They can be found during long walks, cold swims, heart-rate-increasing early morning exercises and slow breathwork evenings. The belly laughs with friends, the nervousness of firsts, the thrill of mastering a skill you never thought you had. The honour is to be a human, and to get to feel it all... the good, the bad, the horrid and wonderful. AI could never. And never should try.
So where do I draw the line? AI can surface patterns, prepare clinicians, reduce admin, and maintain continuity between sessions. But the therapeutic relationship, the space where someone sits across from another human being and is truly seen, that’s sacred. And the goal of any AI in mental health should be to get people back into their lives, not deeper into a screen.
On the overestimation side, there’s a fantasy that AI can replace the therapeutic relationship. That you can build a chatbot, train it on enough clinical data, and suddenly you have a therapist. You don’t. Therapy works because of human connection: those micro-moments of attunement, the silence a clinician knows not to fill, and the way a client feels genuinely seen by another person.
On the underestimation side, which I think is more dangerous, people dismiss AI entirely because they fear it. They assume that any involvement of AI in mental health automatically means we’re replacing clinicians or compromising care. But that misses the enormous opportunity to use AI for the work that surrounds therapy: note-taking, pattern recognition across longitudinal data, and the administrative burden that’s actively driving clinicians out of the profession. It’s also driving waitlists. We have large greyed-out blocks in our calendars to tackle admin, yet a client calls, and we have to say we have no available slots. Therapists have to block that time; we can’t fill every waking hour face-to-face. But if AI allowed even one more person to get through a therapist’s door each day, that is enough for me.
I think the truth is somewhere in the middle. AI isn’t going to be a therapist. But it can be an extraordinary tool for therapists, if we engage with it thoughtfully rather than reacting out of fear. Without curiosity, we’re only left with judgement, and judgement born out of fear of the unknown is not a space we want to be making decisions from in health tech.
This one matters to me deeply because it goes right to the heart of why most of us entered this work: to understand the person in front of us.
When a client’s information is scattered across different platforms, something gets lost. Not just data, but context. You might have their assessment scores in one system, their session notes in another, their treatment plan buried in a Word document somewhere. When you sit down before a session and try to form a picture of where this person is at, you’re piecing together fragments rather than seeing a whole person.
And the psychological risk of that is real. You miss patterns. You forget that something a client said three months ago actually connects to what they’re telling you now. You end up doing a kind of “catch-up therapy” at the start of every session because you’ve lost the thread between appointments. That’s not just inefficient; it’s a clinical risk. Clients can feel it too. They notice when you’ve forgotten something, or when they have to repeat themselves. It erodes trust, and trust is the entire foundation of therapeutic work.
Fragmentation doesn’t just affect admin. It affects the quality of care.
Before a session, it might look like receiving a brief that tells you what’s changed since the last appointment, what patterns are emerging, what the client has been working on between sessions, and where there might be a shift in their presentation that’s worth exploring. Instead of spending the first twenty minutes catching up, you can walk in already oriented.
During a session, it might mean having your notes captured in the background so you can actually be present with your client instead of dividing your attention between the person in front of you and the documentation you need to complete.
After a session, good AI use means your progress notes are drafted for you to review and sign off on, not generated without your oversight, but presented in a way that saves you thirty minutes of admin per client while keeping you firmly in control of the clinical record.
The key is that the clinician is always the decision-maker. AI handles the administrative busywork; the clinician handles the clinical thinking. When it works well, it gives psychologists back the thing they got into this profession for: time to actually be with their clients.
The Clinical Double is essentially a personalised AI counterpart built around each individual clinician, not a one-size-fits-all model. It learns your therapeutic style, your tone, the way you approach different presentations. Over time, it becomes a genuine extension of your clinical practice. For example, my Ecko account would look very different to my colleagues’, and that’s the point.
It is designed to do a few things. It sits in on sessions and retains full context across the entire client history, which means it can surface patterns you might not catch across hundreds of clients. It prepares intelligence briefs before each appointment so you’re not walking in cold. And between sessions, it can engage with patients in a way that’s consistent with the clinician’s own approach, not as a replacement for therapy, but as continuity of care. Checking in on goals, reinforcing techniques, maintaining the therapeutic thread between fortnightly appointments.
The boundary is critical though. The Clinical Double doesn’t make clinical decisions. It doesn’t diagnose. It doesn’t replace the therapeutic relationship. It’s a tool that amplifies the clinician’s capacity, like having a second brain that handles the things you wish you had time for but never do.
Clinicians should think of it the way a surgeon thinks about imaging technology. The MRI doesn’t perform the surgery. It gives the surgeon better information to work with. The Clinical Double doesn’t do the therapy. It gives the therapist better conditions in which to do it.
I think the argument I hear a lot is, “I am worried it is going to make me lose XYZ skills”.
Personally, I think the responsibility to stop that lies with us. Just because we drive cars doesn’t mean we should stop walking. As humans we have constantly invented, and those inventions have allowed us to progress in so many ways. However, they have also risked regression in other ways; fortunately, we have some say in that outcome. Just because we have wheels doesn’t mean we stop using our legs; similarly, just because we have AI doesn’t mean we should stop applying our clinical thinking and professional judgement.
This accountability starts with transparency. Clinicians need to know exactly what the AI is doing with their data and their clients’ data. They need to be able to see the reasoning, review the outputs, and override anything that doesn’t sit right. The moment a clinician feels like the tool is making decisions for them rather than supporting their decisions, that’s when the ethical line has been crossed.
And from a systemic perspective, we need regulatory bodies and professional associations to step up and provide clear guidance. Clinicians shouldn’t have to figure out the ethics of AI on their own. I have personally reached out to my governing body in Australia but received no reply. It’s really sad. I understand I’m a small fish trying to get into Neptune’s aquatic castle... but it would be good if clinicians could talk with the people in charge. In building my platform, I have made a point of talking with hundreds of clinicians so I represent their wants and wishes. I don’t know, maybe I’m simplifying it. But it feels like it should be the same.
Because they’re the ones most affected by this shift, and the least equipped to navigate it.
The core issue is experience. A psychologist with twenty years of practice reads an AI-generated formulation and is far better equipped to spot what’s off. An early career psychologist, who has spent the better part of their education in a textbook, might not, not because they aren’t capable, but because they haven’t sat with enough clients to develop that instinct yet. And these are the people most desperate to stay on top of things. They’re often managing caseloads they had very little say in building, drowning in admin, and just trying to keep their heads above water. Hand them a tool that drafts their notes, and of course they’ll sign off on the output more quickly, not out of carelessness, but out of survival.
That’s the real risk. It’s not that early career psychologists will reject AI. It’s that they’ll adopt it without the experience to know when it’s wrong. Universities, supervisors, and employers have a responsibility to teach AI literacy alongside clinical skills. To be honest, this shouldn’t just be in psychology. AI literacy should be in schools, across every sector. I don’t see any field that won’t be affected.
It requires three things: clinical governance, transparency, and humility.
Clinical governance means that clinicians aren’t just consulted during the design of these tools, they’re embedded in the design. Not as advisors who get a polite email every quarter, but as co-builders who shape how the technology works at every level.
Transparency means that clinicians and clients both understand what the AI is doing. What data is it using? How is it generating its outputs? Where is the information stored? Who has access? These aren’t nice-to-haves; they’re non-negotiable. Mental health data is among the most sensitive information a person can share, and the standard for handling it should be the highest in any industry.
And humility means acknowledging what AI can’t do. Every responsible AI company in mental health should be able to clearly articulate the limitations of their technology, not just the capabilities. If a company can’t tell you what their tool isn’t good at, I’d question whether they really understand the clinical environment they’re building for.
In practice, safe AI also means alignment with existing professional standards, AHPRA guidelines, privacy legislation, and the ethical codes that already govern clinical work. AI shouldn’t exist in a regulatory vacuum. It should be held to the same standard we hold ourselves to as clinicians.
Start with your admin.
Use AI to help draft your progress notes after a session, not to write them for you, but to give you a first draft that you then review, edit, and sign off on. Most clinicians spend thirty to forty-five minutes per client on documentation. If AI can cut that to ten minutes of review time, you’ve just reclaimed hours of your week.
It’s low-risk because the clinician is still the final authority on the clinical record. You’re not handing over clinical judgement, you’re handing over the first pass at a task you were going to do anyway. And the quality check is built in because you’re reviewing every word before it becomes part of the client’s file.
It’s also a great way to start building familiarity and comfort with AI in a clinical context. You learn where it gets things right, where it misses nuance, and what kind of oversight it actually needs. That experience is invaluable as AI becomes more embedded in practice: you develop an informed, critical relationship with the technology rather than either fearing it or trusting it blindly.
Start small. Start with something you already find tedious. And make sure you stay in the loop.
I think within the next few years we’ll see AI move from being a novelty or a source of anxiety to being a standard part of the clinical toolkit, I guess in the same way that telehealth went from being a pandemic workaround to an accepted mode of practice.
The biggest shift will be in continuity of care. Right now, therapy mostly happens in fifty-minute blocks, often monthly in Australia. Between those sessions, clients are essentially on their own. AI is going to fill that gap, not by replacing the therapist, but by maintaining the therapeutic thread. Checking in on goals, reinforcing skills, flagging changes in presentation. The clinician stays in control, but the client feels supported between appointments in a way that’s never been possible before.
I also think we’ll see AI dramatically reduce the administrative burden that’s currently driving clinicians out of the profession. Workforce retention is a crisis in mental health. If we can give clinicians back even a few hours a week by automating the operational side of practice, we keep more people in the profession and more clients get access to care.
The piece I’m most hopeful about is clinical intelligence: AI that helps clinicians see patterns across a client’s entire treatment history that they might miss in the moment. Not replacing clinical judgement, but augmenting it with longitudinal data that no human brain can hold in working memory across hundreds of clients.
Evolution won’t happen overnight, and it shouldn’t. But the direction is clear: AI that makes clinicians better at what they already do, not AI that tries to do it for them.
Look, I am an early career psychologist. So I won’t attempt to impart wisdom, but instead I will encourage readers to do something I am also having to do myself: be curious.
I think that would be my advice for most things, really. Ask, and then ask again. Form an opinion, become unwaveringly sure of it, then go back and find all of the evidence against it. Once you’ve figured it out, know that you haven’t, and do the whole thing in reverse.
Because here’s the thing: the psychologists who engage with AI now, who learn how it works, who push back on what needs pushing back on and embrace what genuinely helps, are the ones who will shape this technology. If clinicians step back and leave AI development to engineers and investors alone, we’ll end up with tools that don’t understand the therapy room. And our clients will be the ones who pay for that.
You don’t need to go and build a whole piece of AI infrastructure as I have. I fell down a rabbit hole that I happen to love, but I can see how many would think “that is taking ‘learning’ to the extreme”. What you can do on a really basic level is join people’s beta testing, reach out to products, join committees. Because if you’re there for the making of something, the manual becomes a lot less scary.
The central message of this interview is that AI in therapy should not be understood as a replacement for psychologists, therapists, or the therapeutic relationship. Used responsibly, AI can help reduce administrative burden, support continuity of care between sessions, and give clinicians more time and context for the work that depends on human judgment, therapeutic skill, and professional care.
For clinicians, the challenge is not simply whether to use AI, but how to use it ethically, transparently, and in ways that keep professional judgment firmly at the centre of mental health practice.
If you’re a psychologist, therapist, or mental health clinician interested in how AI can reduce admin burden, support continuity of care, and help clinicians stay focused on the human side of therapy, you can explore Ecko Health and create a free account.
You can also reach out to Rosie Chapple on LinkedIn to learn more about her work at the intersection of psychology, clinical practice, and AI in mental health.