There seems to be an issue with the AI's interpretation of the Sleep Disturbance Value (according to the attached screenshot, this should be called AHI, the Apnoea-Hypopnea Index). Is the AI giving wrong info here?
The AI is telling me that because my Sleep Disturbance Value of 1 is lower than the national average in my country, it indicates possible issues.
I think that the logic here is reversed. The lower the Sleep Disturbance Value, the better. I use a CPAP machine, so I do expect the AHI to be lower than 5 (my target is lower than 2).
I mentioned my circadian rhythm disorder to the assistant and it gave incorrect treatment information.
Is there any way to limit its ability to provide poor medical advice?
Sure, it says to speak to your doctor, but that doesn't justify poor information/advice. I was not given the option to correct or inform it like you can with ChatGPT.
This feature won’t be useful for me, but I hope it helps others… But I mostly hope it doesn’t mislead anyone.
I tried this and used the "How did I sleep last night" question. That made me want to know what it would say about Sunday's sleep. There is no option to check any past night's sleep. It would also be nice to see a comparison of two different sleep cycles.
Also, a tad off topic, but it did mention my snoring numbers. How does the app separate snoring from continuous allergic rhinitis, which can sometimes sound like snoring? And yes, that is a thing.
Hi @ShoalBear,
EDIT: sorry, I was mistaken and was misled by the Assistant's answer…
Regarding allergic rhinitis: this sound will probably confuse the algorithms into classifying it as snoring. To add a new sound class for rhinitis, we would need hundreds of thousands of samples of confirmed rhinitis sounds, so that the algorithms can learn to recognize them.
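For illustration, here is a minimal sketch of why that is; it is hypothetical, not our actual pipeline, and the features and labels are made up. A supervised sound classifier can only assign a clip to one of the classes it was trained on, so without a labelled rhinitis class such sounds get forced into the closest existing class, typically snoring.

```python
# Minimal sketch of a supervised sound classifier (hypothetical, not the
# app's actual pipeline). It can only predict classes it has labelled data for.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in feature vectors for audio clips (e.g. spectral features) with
# labels 0 = breathing, 1 = snoring. There is no "rhinitis" label.
features = rng.normal(size=(200, 16))
labels = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(features, labels)

# A rhinitis-like clip still has to land in one of the known classes.
new_clip = rng.normal(size=(1, 16))
print(clf.predict(new_clip))  # prints 0 or 1, never "rhinitis"
```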
@b_j in fact you can. We use a grounding approach: we first restrict the answer to relate only to the data we have provided, then we explain what the correct ranges are and what is good, bad or neutral. In other words, we specify the whole playground very carefully and only ask the AI to summarize it in human language and translate it into the user's language…
This approach greatly reduces the risk of AI hallucination, but of course there may still be issues, which can be addressed by further restricting the scope. We can work iteratively here and fine-tune, but for that I would need to see what prompt we sent to the model, so that I can first reproduce the problem and then modify the prompt to fix it…
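For illustration, here is a minimal sketch of how such a grounded prompt could be assembled; the function and field names are hypothetical and this is not our actual implementation.

```python
# Minimal sketch (hypothetical) of the grounding approach: the prompt carries
# only the user's own data plus explicit reference ranges, and the model is
# asked solely to summarize and translate.
def build_grounded_prompt(metrics: dict, ranges: dict, user_language: str) -> str:
    lines = [
        "You are a sleep-report summarizer.",
        "Use ONLY the data and reference ranges below; do not add outside facts,",
        "diagnoses, or treatment advice.",
        "",
        "User data:",
    ]
    for name, value in metrics.items():
        lines.append(f"- {name}: {value}")
    lines.append("")
    lines.append("Reference ranges (lower AHI is better):")
    for name, (good, concerning) in ranges.items():
        lines.append(f"- {name}: good if <= {good}, concerning if >= {concerning}")
    lines.append("")
    lines.append(f"Summarize the data in plain language, in {user_language}.")
    return "\n".join(lines)


# Example: an AHI of 1 falls in the explicitly stated "good" range, so a
# correctly grounded summary should describe it as good, not as a possible issue.
prompt = build_grounded_prompt(
    metrics={"AHI (apnoea-hypopnea index)": 1},
    ranges={"AHI (apnoea-hypopnea index)": (5, 15)},
    user_language="English",
)
print(prompt)
```

With the ranges spelled out in the prompt itself, the model has no room to invert the logic the way described above; it only rephrases what the playground already states.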