LUNCH AND LEARN: EPS Reporting

In our second RxWeb Lunch and Learn Webinar, our Training & Installations Manager, Maggie Rabel, provided an overview of RxWeb EPS reporting.

 

https://www.youtube.com/watch?v=wU02pdyawYU

 

RxWeb is designed to be the digital foundation of your pharmacy. We are proud to be the UK’s only web-based Patient Medical Record (PMR) system, built to exceed the needs of pharmacies of all types and sizes. RxWeb is a simple, straightforward system providing users with fast and intuitive workflows that streamline the whole dispensing process and other areas of pharmacy management.

Our system stands apart from its competitors because it enables you to manage the day-to-day operations of your pharmacy, seamlessly running clinical services, patient communications, stock management and robot integrations, leaving you free to focus on patient care.

If you’re not currently using RxWeb but would like to learn more about the system, we recommend booking a demonstration or downloading a brochure at the links below.

Click to Book a Demo           Click to Download a Brochure

In the meantime, if there’s anything else we can help you with, let us know! You can find our contact details here.

 

Clanwilliam Health Update on the HSE Ransomware Attack

While the HSE works through the impact of the recent ransomware attack on its IT systems, we want to assure all customers and users that no Clanwilliam Health system has been affected by this attack. We have undertaken a thorough investigation of our system infrastructure and, where necessary, restricted access to all HSE systems while they complete their recovery and restoration processes.

In addition to these steps, we have conducted risk assessments for all of our systems and will continue to monitor the situation closely, assisting both the HSE and all of our customers as best we can.

We will continue to notify customers directly, through email, as soon as we receive updates from the HSE that their systems are back online and safe to use.

Dictate IT Provides GP Market with AI-Powered Speech Recognition Solutions

Dictate IT, part of healthcare technology company Clanwilliam Group, today confirmed its plans to enter the UK GP and primary care market with its AI-powered speech recognition products for the first time.

With over 30,000 clinical users and almost 20 years’ experience of supplying NHS Trusts with digital dictation and transcription software, Dictate IT has been developing speech recognition in its AI labs since 2014. The company is the only UK supplier to have developed its own cutting-edge, deep-neural-network-based medical speech recognition engine, enabling highly accurate speech recognition for UK medical dictation.

Its products are already widely used in NHS Trusts, and Dictate IT has been trialling two of its speech recognition products with GPs across the country for four months, with strong results. Both products are designed to save valuable time and increase efficiency.

Dictate Swift is a workflow-based speech recognition solution designed to support existing letter production processes. Doctors dictate securely from their iOS or Android smartphone; the medical letter is then made available via the web-based application to their administrative staff for final review and completion. The software facilitates remote working and is also integrated with EMIS and TPP clinical systems.

“We have been extremely satisfied with Dictate IT – the product works really well, transcribing speech to text for medical letters near flawlessly. This significantly reduces secretarial typing times and allows staff to focus on other tasks that are always adding to their workloads. The ability to dictate remotely and securely using the smartphone app is wonderful; it is so intuitive and easy to use that even our most IT-wary colleagues took to it easily!”

Dr Matt Best, from Yelverton Surgery

Dictate IT’s second product offering in the market is Dictate Live, which provides immediate speech-to-text conversion. GPs simply place their cursor where they want the text to appear and dictate. Their voice is picked up either by their desktop microphone or the Dictate app, making the process completely seamless. Dictate Live works with any third-party system, including EMIS Web and SystmOne. Research has found that, in general, people can speak three times faster than they can type, so Dictate Live offers enormous potential for time savings when capturing clinical notes.

“Dictate Live is very quick and writes into any programme a GP might use. It saves me an hour per day.”

Dr Andrew Sharpe, from Ashley Centre Surgery

“We are unique in that our proprietary AI-based speech recognition technology was built specifically for use in the NHS. Our proposition for GPs is simple – cost-effective products that deliver immediate benefit, with little or no implementation effort required. Our products don’t require any voice training or hardware and cover a wide range of accents found across the country. We are providing a three-month trial to allow users to see for themselves how accurate our speech engine behind Dictate Swift and Dictate Live really is.

Additionally, we continue to expand our provision of speech recognition into secondary care settings, therefore we expect to soon have multiple regions where we provide end-to-end digital clinical correspondence services. The resultant integrated approach will radically expand the scope of benefits that we will be able to bring to our NHS customer base.”

Rob Hadley, Commercial Director of Dictate IT


Dictate IT is offering GPs a free three-month trial of Dictate Swift and Dictate Live, with no installation cost. To find out more and book a free trial, use the link below.


Get started


Southport and Ormskirk Hospital NHS Trust successfully implements Bluespier Theatres

Bluespier Theatres went live on 22nd March 2021 across all theatres at the Trust, replacing the Trust’s previous Theatre Management System, Galaxy DXC. Bluespier Theatres is fully integrated with the Trust’s EPR, Careflow, ensuring a seamless user experience. Elective and emergency theatre bookings can be scheduled and managed directly from within System C’s Patient Administration System, and the wider theatre record can also be accessed directly from within Careflow, allowing end-to-end management of the theatre journey.

Along with other recent implementations, the project was delivered through a difficult period for the NHS, with the Trust in the midst of a second Covid wave. This, however, did not affect the scheduled go-live date, and the determination and collaborative working between Bluespier and the Trust ensured a positive go-live.

Following the installation of the core Theatre system, we look forward to working with the Trust to embed further functionality and help the Trust reap the benefits as an organisation.

Paul Chadwick, Head of IT for Southport and Ormskirk Hospital NHS Trust, said: “To deliver the best care for our patients, it is vital that our Theatre and clinical teams have access to all the information they need at the point of care. Through its direct integration into the Trust EPR, Bluespier provides this. The implementation has been a great success through a collaborative effort. We look forward to building on this going forward and continuing to provide high quality care for our patients.”

Stuart van Rooyen, Managing Director of Bluespier, said: “The implementation of Bluespier Theatres into Southport and Ormskirk Hospital NHS Trust has been a huge success due to the vast efforts and hard work of Trust and Bluespier staff. I’m delighted the Trust are looking to further utilise other Bluespier modules including our new mobile application, Bluespier Mobile. We look forward to a long and successful partnership working collaboratively with the Trust and System C improving both efficiency and patient care through utilising fit for purpose technology.”

LUNCH AND LEARN: REPORTING

In this, our first DGL Practice Manager Lunch and Learn Webinar, our Service Delivery Leader, David West, brought attendees through all of the reporting functionality within DGL. We looked at areas around patients, billing and accounts, while also showing users where to find reports and how the key ones work, e.g. filtering and adding columns.

Please feel free to share this with your wider team and colleagues if you think it will be of use.


Speech Recognition from Audrey to Alexa – A Brief History

Speech Recognition is a technology that has fascinated and disappointed doctors for more than 25 years. Dictate IT has been developing Speech Recognition solutions in our AI labs since 2014; outlined here is a brief history of the science behind the technology and a reflection on why now might be the right time to give it a second look.

The ability for machines to recognise and respond to human speech has been a desire since the outset of computing. Early computer scientists wished they could interact with their creations as they did with their colleagues – by talking. 

The Post-War Period – The Birth of the Computer Age

The first machine capable of recognising human speech was invented in 1952 and named ‘Audrey’ by Bell Labs in the US. She could recognise spoken numbers from 1 to 9. Ten years later, IBM released ‘Shoebox’, which could recognise simple calculations and input them into a calculator. In the UK, scientists worked to improve recognition using statistical information about allowable phonemes in English, while in the USSR researchers pioneered dynamic time warping to cope with variations in speaking speed. Ask the typists in your surgery about the variations in the speed of speech patterns and you will understand the significance of this.

Progress Slows After a Promising Start

By the 1970s progress had slowed, hampered by the idea that to improve recognition, machines would need to ‘understand’ speech, something that turned out to be unnecessary for the task of recognition and something which still eludes us today.

The late 70s and early 80s saw the introduction of two key new approaches: n-gram language models and Hidden Markov Models (HMMs).

N-gram language models describe the probability of a sequence of words and are often context-dependent. For example, in the medical dictation domain, the trigram “This charming gentleman” is far more likely than “This charming pineapple”. This probability allows speech recognition to go beyond just the phonetic information in the audio.
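The idea can be sketched in a few lines of Python. This is a toy illustration only: the corpus, the `trigram_prob` helper and the maximum-likelihood counting are invented for the example, and a production language model would be trained on far more text and smoothed to handle unseen word sequences.

```python
from collections import Counter

# Toy stand-in for a medical dictation corpus (entirely made up).
corpus = (
    "this charming gentleman presented today "
    "this charming gentleman was seen in clinic "
    "this pleasant lady presented today"
).split()

# Slide a window over the token stream to count bigrams and trigrams.
bigrams = Counter(zip(corpus, corpus[1:]))
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))

def trigram_prob(w1, w2, w3):
    """Maximum-likelihood estimate of P(w3 | w1, w2)."""
    context_count = bigrams[(w1, w2)]
    return trigrams[(w1, w2, w3)] / context_count if context_count else 0.0

# "gentleman" always follows "this charming" in the toy corpus,
# while "pineapple" never occurs, mirroring the example in the text.
print(trigram_prob("this", "charming", "gentleman"))  # 1.0
print(trigram_prob("this", "charming", "pineapple"))  # 0.0
```

A recogniser combines such probabilities with acoustic scores, so an acoustically ambiguous word is resolved in favour of the sequence the language model finds more plausible.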


Hidden Markov Models are variants of techniques developed in the 1960s to aid prediction in the US defence industry, in turn based on mathematics outlined by the Russian mathematician Andrey Markov in the early 20th century. Markov models aim to simplify prediction of a future state by using only the current state, rather than needing many prior states. The adoption of HMMs for speech recognition, coupled with the increases in computing power needed to run them feasibly, produced huge leaps in accuracy and vocabulary size. HMMs continued to dominate speech recognition approaches for the next 25 years.
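The Markov property described here, predicting the next state from the current state alone, can be illustrated with a short Python sketch. The two weather states and their transition probabilities are invented for the example, and this is a plain Markov chain rather than a hidden one (in an HMM the states themselves are unobserved):

```python
import random

# Hypothetical two-state chain: the next state depends only on the
# current state, never on the path taken to reach it.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state using only the current state."""
    states, weights = zip(*transitions[state].items())
    return rng.choices(states, weights=weights)[0]

def simulate(start, n_steps, seed=0):
    """Walk the chain for n_steps, returning every state visited."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n_steps):
        chain.append(step(chain[-1], rng))
    return chain

print(simulate("sunny", 5))
```

In speech recognition the hidden states might be the phonemes being spoken, observed only indirectly through the audio; the same only-the-current-state assumption is what keeps prediction tractable.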

During the 90s and 00s, the PC enabled HMM-based speech recognition to become more widely available to consumers. Accuracy continued to improve, though it began to plateau in the early 00s, and systems still required a degree of per-user training and manual correction based on a speaker-dependent individual profile. Speech recognition thus acquired a slightly jaded reputation as being ‘not quite good enough’ for normal usage. When you last tried speech recognition on your PC to dictate a medical report, you probably used a system with an HMM acoustic model. The results would have been interesting, but not good enough, and you probably concluded that it was not for your practice.

 

Enter the Neural Network and Machine Learning 

Artificial Neural Networks (ANNs) were first described in the 1940s and are networks of nodes and connections inspired by the workings of biological neurons. As with real neurons, as the network ‘learns’, some connections between nodes become stronger and some weaker. The difference from classic computer programming was that ANNs ‘learn’ by themselves rather than being driven entirely by hand-crafted rules given to them by their human programmers. It wasn’t until the 1980s that computing power was sufficient to realise the theoretical technique, and interest in neural networks surged with hopes of (strong) Artificial Intelligence based on this biological model. The concept was applied to tasks like speech recognition, but without much success compared with the dominant HMMs, and general interest in ANNs declined.

However, in the early 00s, a specific kind of ANN method called Deep Learning began to emerge as a potentially superior alternative. In particular, a collaboration between researchers at Google, Microsoft, IBM, and the University of Toronto showed how Deep Learning techniques could bring significant improvements to many areas including speech, image, and handwriting recognition.

Deep Learning

Deep Learning uses Neural Networks that are ‘deep’ by virtue of having multiple layers of nodes between their input and output layers; in speech recognition, the input is a segment of audio and the output a piece of text. Each layer ‘learns’ to transform the input towards the output in a slightly different way. In 2009 a researcher at Google realised that by using Graphics Processing Units they could massively speed up the training of Deep Neural Networks (DNNs), dramatically shortening the time taken to experiment with new models. By 2012 it was clear that DNNs were outperforming older approaches in multiple fields, and this kicked off the huge industry interest in, and public awareness of, the use of ‘AI’.
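As a rough sketch of what ‘deep’ means here, the short Python program below stacks several fully connected layers between an input and an output. Everything in it is illustrative: the weights are random rather than learned, and the four-number input merely stands in for an audio feature frame, so it shows the shape of the computation rather than a working recogniser.

```python
import random

random.seed(0)  # reproducible random weights for the sketch

def make_layer(n_in, n_out):
    """Random weights and zero biases for one fully connected layer."""
    weights = [[random.uniform(-1.0, 1.0) for _ in range(n_in)]
               for _ in range(n_out)]
    return weights, [0.0] * n_out

def dense(inputs, layer):
    """One layer: a weighted sum per node, then a ReLU non-linearity."""
    weights, biases = layer
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Three hidden layers between the input and output layers make it 'deep'.
layers = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 8), make_layer(8, 2)]

def forward(features):
    """Pass the input through every layer in turn."""
    activation = features
    for layer in layers:
        activation = dense(activation, layer)
    return activation

# A four-number stand-in for one frame of audio features.
print(forward([0.1, 0.5, -0.3, 0.9]))
```

Training adjusts the weights so that these successive transformations map audio features towards the right text; GPUs made that adjustment fast enough to be practical at scale.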


DNNs are now used by all the major consumer speech recognition products you may be familiar with: Siri, Alexa, Cortana, Google Home/Nest, etc.

 

Dictate IT Neural Net Stack

Dictate IT began developing its own Deep Neural Network-based speech recognition in 2014. We have always focused on UK medical report recognition. The state of the art is changing constantly, but we currently use two kinds of neural networks:

  • An acoustic model based on a factorised Time-Delay Neural Network (TDNN-F)
  • An AWD-LSTM language model (aka an ASGD Weight-Dropped Long Short-Term Memory model)

This allows us to provide highly accurate speech recognition for UK medical dictation, with no training period required, while covering a wide range of the accents found in the NHS. If you’ve not used medical speech recognition in the last few years, we think you will be impressed by the improvements in the field.

 

Contact us for a free trial


Get started


University Hospital Plymouth NHS Trust goes live with Bluespier Theatres across the entire Trust

Bluespier Theatres was successfully implemented on 10th February 2021 across all theatres at the Trust, replacing paper processes. The new electronic approach ensures surgical information is captured electronically in real time as part of existing theatre workflows. Bluespier has been integrated with the current IPM PAS to ensure scheduling is standardised and can be completed accurately and with ease, for a seamless user experience.

The implementation was a great success given the challenges placed on the NHS by the current pandemic, with the go-live falling in the midst of the largest peak and a national lockdown.

Following an initial 6–8 weeks where the system is embedded at the Trust, Phase 3 will commence, where additional functionality will be implemented and further clinical benefits realised.

Cindy McConnachie, Senior Matron at University Hospital Plymouth, said: “Bluespier Theatres will allow us to commit further to safeguarding patients on surgical pathways. The whole change management process has not been without its challenges… despite this we have made the change and rapidly.

The support of the team at Bluespier has been outstanding. We were supported every step of the way, challenged when we had doubts. The Bluespier team ensured our transition onto Bluespier was made with minimal interruption to services and that patient safety was maintained.

We look forward to continuing to work with Bluespier and would like to thank them for the ongoing support of our teams. We also look forward to realising all the potential that Bluespier Theatres can bring to our service in the future.”

Stuart van Rooyen, Managing Director of Bluespier, said: “We are delighted to have been selected to provide our theatre management software to Plymouth and are proud of the seamless rollout despite the additional operational challenges due to the global pandemic. I’m grateful for all the additional time and resource the Trust have given this project on top of existing workload in what has been an extremely challenging time for NHS staff. I’m excited to continue working with the Trust to implement further phases and build on the great relationship we have.”

My Aged Care e-Referrals free up precious time for healthcare workers

From one end of Australia to the other, My Aged Care e-Referrals are saving health professionals precious time so they can focus on what really matters – looking after patients.

The Australian Government’s My Aged Care service is the entry point for older Australians to access government-funded aged care. General practices play a key role in supporting patients to access these services.

My Aged Care introduced e-Referrals to support practices by making the process easier for healthcare workers around the country to refer their patients for an aged care assessment.

Chandler’s Hill Surgery near Adelaide in South Australia was part of the 2019 pilot programme trialling My Aged Care e-Referrals (which are powered by HealthLink) and has continued using them ever since.

Nurse manager Casey Franchi, who’s worked at Chandler’s Hill Surgery for five years, uses e-Referrals on a regular basis. The surgery uses the Best Practice PMS (practice management system).

“They are so easy to use and save so much time. All the patient details are pre-populated, which makes filling it out so much faster, more accurate and more efficient.”

Prior to e-Referrals, Casey used the My Aged Care website to send referrals.

“The website referral is time-consuming compared to e-Referrals because you have to manually type all the patient details in from their file, which can also lead to transcription errors.”

Casey says e-Referrals are so quick and easy to use, she can fill them out during a patient consultation and ask the patient questions while they are there.

“With the website, because it was time-consuming, I’d have to fill the referral out after the patient had left because it required a big chunk of my time to do it. And then if I needed to ask them a question, I’d have to contact them.”

Another benefit of e-Referrals is how easy it is to attach files.

“The e-Referral is in the patient file so you can easily and securely access any documents to upload to back up the referral.”

She says she can’t imagine life without e-Referrals.

“Life is so much easier now and I would recommend them to any general practice. The time saved means I can focus on the important stuff like caring for patients and not on filling out forms.”

My Aged Care director of online services and communication Kylie Sauer says e-Referrals have improved healthcare workers’ experience by offering better integration into their existing workflow and taking away the need to exit their PMS to send a referral.

“Sending a referral by fax or the My Aged Care website takes longer. By pre-populating patient information and GP details, e-Referrals are the fastest and most efficient way to refer patients to My Aged Care,” she says.

“Faxes are particularly slow to process, which results in patients waiting longer to be referred for an assessment.”

Warragul Family Medicine also saving precious time

Warragul Family Medicine in Victoria, which also uses Best Practice, has been using My Aged Care e-Referrals since December 2019.

Allied health assistant and medical receptionist Marcia Rollinson previously sent referrals via the My Aged Care website but can’t imagine going back to that method.

“I love e-Referrals. The pre-population of patient details is fantastic and saves me so much time. That’s the best part of the form because it’s so quick and easy.”

She also likes that e-Referrals are automatically saved back into the patient file.

“Previously, I’d have to save it externally somewhere or print it out and have a hard copy file, which was a hassle and not very secure.”

Being able to easily track back electronically to see when an e-Referral was sent is another bonus.

“Previously I’d have to try to think back or go through hard copy files, which was a pain.”

Marcia works two-and-a-half days a week and estimates she saves about an hour a week by using e-Referrals.

She also likes that she’s prompted if she misses a tab that needs to be filled out.

“For some reason there’s one tab I always forget to fill out, but it always prompts me at the end to fill it out, so nothing is missed.”


Did you know?

My Aged Care e-Referrals take about 5 minutes to complete and are processed instantly once successfully submitted. This makes e-Referrals the quickest and easiest way to refer patients to My Aged Care.

Here are our user guides which will help you get started:

Best Practice

Medical Director Clinical

Genie Solutions

Medtech Evolution (User guide coming soon)

For more information or technical support regarding the e-Referral forms, please contact the HealthLink help desk on 1800 125 036 (Option 4) or email helpdesk@healthlink.net

For more information about My Aged Care, please visit https://www.myagedcare.gov.au/health-professionals