We tested four of the top Chinese LLMs - Tongyi Qianwen 通义千问, Baichuan 百川大模型, DeepSeek 深度求索, and Yi 零一万物 - to assess their ability to answer open-ended questions on politics, regulation, and history. Unlike models from OpenAI and Google, which require enormous computational resources, DeepSeek was trained using considerably fewer GPUs - raising questions about whether massive hardware investments are necessary to achieve high-performance AI. OpenAI has had its own privacy problems - in 2023 it admitted to leaking users' chat histories - and it is not clear how quickly the company will fold when law enforcement asks for details about how a suspect uses ChatGPT. DeepSeek startled everyone last month with the claim that its AI model uses roughly one-tenth the computing power of Meta's Llama 3.1 model, upending an entire worldview of how much energy and how many resources it will take to develop artificial intelligence. The cutting-edge projects I dreamed of working on demanded computational power that was far beyond my budget.