Indian-origin professor faces inquiry over drawing double salaries in UK
London: A 44-year-old Indian-origin medical professor is facing a tribunal inquiry over his alleged failure to inform a UK university that he was still working at another varsity, continuing to draw salaries from both for a little over a year. Akhilesh Reddy admits being in full-time employment with the University of Cambridge and University College London (UCL) between September 2015 and November 2016, according to ‘The Times’. He is set to appear before the UK Medical Practitioners Tribunal Service this month to face charges that he failed to inform UCL that he remained employed by Cambridge and knowingly received full-time salaries from both institutions. The professor, who specialises in sleep disorders, faces being struck off over the allegations, the newspaper reports. He made his mark in the study of sleep patterns when he discovered that red blood cells have their own body clocks, and was hailed as a catch for UCL when he was appointed to the chair of experimental neurology. A spokesperson for UCL said they have “nothing to say until the hearing is over”, and a spokesperson for Cambridge University declined to comment. The Medical Defence Union, which is representing him, has also declined to respond. Reddy now works at the University of Pennsylvania as a professor of pharmacology. The US university has refused to answer questions on whether it was aware of the allegations when he was appointed. (PTI)
Human touch in chat bots can backfire
New York: A team led by an Indian-American researcher has found that giving a human touch to chatbots like Apple Siri or Amazon Alexa may actually disappoint users.
Just giving a chatbot a human name or adding human-like features to its avatar might not be enough to win over a user if the device fails to maintain a conversational back-and-forth with that person, according to S. Shyam Sundar, co-director of the Media Effects Research Laboratory at Pennsylvania State University.
Because people are expected to be leery of interacting with a machine, developers typically give their chatbots human names or programme a human-like avatar to appear when the chatbot responds to a user.
For the study, the researchers recruited 141 participants through Amazon Mechanical Turk, a crowdsourcing site where people are paid to take part in studies.
Sundar said the findings could help developers improve the acceptance of chat technology among users. (IANS)