ChatGPT's "long-term memory" allows prompt injections to become permanent
Facepalm: "The code is TrustNoAI." This is a phrase that a white hat hacker recently used while demonstrating how he could exploit ChatGPT to steal anyone's data. So, it might be a code we should all adopt. He discovered a way hackers could use the LLM's persistent memory to exfiltrate data from any user continuously.
Break different: Apple recently released the latest version of macOS, but things aren't going smoothly for Sequoia. The operating system is reportedly causing serious compatibility issues for security tool vendors, and it appears Cupertino was aware that the OS wasn't fully ready for prime time.
Facepalm: Binarly analysts have issued a new warning just a couple of months after disclosing a security issue involving compromised Platform Keys used to enforce Secure Boot protections. The PKfail problem affects a significantly larger pool of devices and brands than initially believed, and is not limited to firmware products developed by AMI.
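PKfail boils down to vendors shipping production firmware signed with non-production "test" Platform Keys, and the publicly documented test keys carry telltale subject strings such as "DO NOT TRUST" or "DO NOT SHIP". As a rough illustration (this is a crude first-pass sketch, not Binarly's actual scanner), one could flag a firmware dump by searching for those markers:

```python
# Hedged sketch: flag firmware images that embed the marker strings found
# in publicly reported PKfail test Platform Keys. A real scanner would
# parse the UEFI key store rather than grep raw bytes.
SUSPECT_MARKERS = (b"DO NOT TRUST", b"DO NOT SHIP")

def may_contain_test_pk(firmware_blob: bytes) -> bool:
    """Return True if the blob contains a known test-key marker string."""
    return any(marker in firmware_blob for marker in SUSPECT_MARKERS)

# Example: a dump embedding AMI's well-known test-key subject is flagged.
print(may_contain_test_pk(b"...CN=DO NOT TRUST - AMI Test PK..."))  # True
print(may_contain_test_pk(b"...CN=Vendor Production PK 2023..."))   # False
```

A byte-level scan like this can produce false negatives (keys without the marker text), which is why Binarly's broader analysis keeps widening the set of affected vendors.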
Discord is introducing passwordless login through passkeys backed by device biometrics, such as Face ID and Touch ID. The company is also migrating audio and video calls (though not text messaging) to a new system with end-to-end encryption enabled by default.
Data collection has never been as powerful or lucrative as it is right now
A hot potato: Currently, the AI industry is the Wild West. There are very few laws on the books that govern the market. This lack of formal regulation has left AI firms operating on the honor system, promising to effectively self-regulate, but Democrats in the US Senate believe the self-regulation experiment has failed. They're now asking trade regulators to investigate whether AI firms have committed antitrust violations, particularly concerning AI-generated content summaries.