Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster.
ylai@lemmy.ml • 4 months ago • 0 comments • 8 upvotes

Anyscale addresses critical vulnerability on Ray framework — but thousands were still exposed
ylai@lemmy.ml • 6 months ago • 0 comments • 1 upvote

AI hallucinates software packages and devs download them – even if potentially poisoned with malware
ylai@lemmy.ml • 6 months ago • 0 comments • 16 upvotes

Why Are Large AI Models Being Red Teamed?
ylai@lemmy.ml • 7 months ago • 0 comments • 4 upvotes

How 'sleeper agent' AI assistants can sabotage code
ylai@lemmy.ml • 9 months ago • 0 comments • 4 upvotes

NIST: If someone's trying to sell you some secure AI, it's snake oil
ylai@lemmy.ml • 9 months ago • 1 comment • 14 upvotes

Boffins devise 'universal backdoor' for image models to cause AI hallucinations
ylai@lemmy.ml • 10 months ago • 0 comments • 2 upvotes

LLM Finetuning Risks
ylai@lemmy.ml • 1 year ago • 0 comments • 2 upvotes

Are Local LLMs Useful in Incident Response? - SANS Internet Storm Center
ylai@lemmy.ml • 1 year ago • 0 comments • 1 upvote

Microsoft Bing Chat spotted pushing malware via bad ads
ylai@lemmy.ml • 1 year ago • 0 comments • 2 upvotes

New AI Beats DeepMind’s AlphaGo Variants 97% Of The Time!
ylai@lemmy.ml • 1 year ago • 0 comments • 1 upvote

Identifying AI-generated images with SynthID
Capt. AIn@infosec.pub (mod) • 1 year ago • 0 comments • 1 upvote

Thinking about the security of AI systems
Capt. AIn@infosec.pub (mod) • 1 year ago • 0 comments • 1 upvote

GitHub - google/model-transparency
Capt. AIn@infosec.pub (mod) • 1 year ago • 0 comments • 1 upvote

disinformation videos on AI?
kristoff@infosec.pub • 1 year ago • 0 comments • 1 upvote

Universal and Transferable Attacks on Aligned Language Models
Capt. AIn@infosec.pub (mod) • 1 year ago • 0 comments • 1 upvote

OWASP Top 10 for LLMs (v1.0)
netrom@infosec.pub • 1 year ago • 0 comments • 2 upvotes

Cybercriminals train AI chatbots for phishing, malware attacks
Capt. AIn@infosec.pub (mod) • 1 year ago • 0 comments • 1 upvote

GPT Malware Creation
stevedidwhat_infosec@infosec.pub • 1 year ago • 0 comments • 1 upvote

Adversarial suffixes that circumvent the alignment of open source LLMs, ChatGPT, Claude, Bard, and LLaMA-2
Capt. AIn@infosec.pub (mod) • 1 year ago • 0 comments • 2 upvotes