AI Roundup: Cryptocurrency exchange platform Bitget announced the launch of its latest AI tool, called Future Quant, which leverages AI technology and sophisticated algorithms to provide users with information to make informed investment decisions. In a separate development, the Philippine military has been instructed to stop using AI apps due to potential security risks. All this, and more, in today’s AI roundup.
1. Bitget introduces AI-powered tool
Cryptocurrency exchange platform Bitget announced the launch of its latest AI tool on Friday. As per a release, the tool, called Future Quant, leverages AI technology and sophisticated algorithms to provide users with curated portfolios and guidance for making informed investment decisions. Bitget says Future Quant does not require any human input and can use AI to automatically adjust its settings according to market dynamics.
2. Curbs on AI chips could help Huawei, analysts say
The ongoing US curbs on the export of AI chips could help Huawei Technologies expand its market in its home country of China, Reuters reported on Friday. Although Nvidia holds an almost 90 percent market share in China, the restrictions could help Chinese tech companies in the race to become the country’s top AI chip provider. Jiang Yifan, chief market analyst at brokerage Guotai Junan Securities, posted on his Weibo account, “This U.S. move, in my opinion, is actually giving Huawei’s Ascend chips a huge gift.”
3. Philippine military ordered to stop using AI apps
While much of the world is adopting AI, the Philippine military has been ordered to stop using AI apps, AP reported on Friday. The order came from Philippine Defense Secretary Gilberto Teodoro Jr. over security risks posed by apps that require users to submit multiple photos of themselves to create an AI likeness. “This seemingly harmless and amusing AI-powered application can be maliciously used to create fake profiles that can lead to identity theft, social engineering, phishing attacks and other malicious activities,” Teodoro said.
4. AI chatbots are propagating racist medical ideas, research says
A new study led by researchers at the Stanford School of Medicine, published on Friday in a Nature journal, found that while AI chatbots have the potential to help patients by summarizing doctors’ notes and checking health records, they are also spreading racist medical ideas that have already been debunked. The research involved posing medical questions about kidney function and lung capacity to four AI chatbots, including ChatGPT and Google’s Bard. Instead of providing medically accurate answers, the chatbots responded with “incorrect beliefs about the differences between white patients and Black patients on matters such as skin thickness, pain tolerance, and brain size.”
5. AI used to identify patients with spine fracture
The NHS ADOPT study has begun using AI to identify patients with spine fractures, according to a release issued by the University of Oxford on Friday. The AI program, developed by medical imaging technology company Nanox.AI, analyses computed tomography (CT) scans to detect spine fractures and alerts a specialist team for prompt treatment. The program was developed by the University of Oxford in collaboration with Addenbrooke’s Hospital in Cambridge, Nanox.AI, and the Royal Osteoporosis Society.