Skip to content

Atlantic Canada Chapter event (6:30pm AT / 5:30pm ET) [Virtual]

Atlantic Canada Chapter event (6:30pm AT / 5:30pm ET) [Virtual]

Thursday, February 20, 2025 (6:30 PM - 8:00 PM) (AST)

Description

Accept emails from <no-reply@zoom.us> to get your personalized registration link. 

Topic: Leveraging GenAI Techniques for Advanced Threat Detection

Presenter name: Dr. Jennifer Vlasiu

Presenter Bio: Jennifer, a recent Doctor of Engineering in Cybersecurity Analytics graduate from George Washington University in Washington, DC, is a recognized expert in artificial intelligence and machine learning. Her contributions have been instrumental in the development and implementation of sophisticated AI models that we benefit from today.

Jennifer’s approach to cybersecurity centers around effective task mitigation and threat intelligence. With a focus on attribution and intent, she navigates cybersecurity challenges, distinguishing between unintended consequences and malicious actions by bad actors. Her expertise in large language models has significantly influenced the landscape of AI.

Beyond her professional accomplishments, Jennifer is committed to making cybersecurity accessible and less intimidating for everyone. As an advocate for women in the field, she hopes to inspire others to pursue fulfilling careers in cybersecurity while encouraging foundational cybersecurity literacy for all individuals.” – IT World Canada’s Top 20 Women in Cybersecurity 

Presentation Description: In this session we will explore the transformative capabilities of Generative AI (GenAI), and how the open source pipelines of Hugging Face can be utilized to solve modern cybersecurity challenges. To illustrate how GenAI pipelines and techniques can pragmatically be used to detect advanced threats of malware that have evaded the moderating systems of some of the largest tech behemoths, we will walk through a case study based on the OpenAI Storefront.  Launched a year ago in January 2024, the OpenAI GPT Storefront, can be seen as akin to the Apple Store, hosting a series of custom versions of ChatGPT.  Globally, there are currently over 3 million commercial GPTs (or “applications”) for users to interact with. Invariably, as the popularity, proliferation, and adoption of large language model (LLM) enabled web applications continues, so too does this attack vector space, leaving users vulnerable to malware, spam, and phishing attacks.

We will use a series of recently exfiltrated data, paired with web crawled data, on which to illustrate a series of investigative GenAI techniques that can be employed to detect cybersecurity threats in LLM web-enabled applications. In doing so, we hope to showcase how malicious actors can exploit the structural vulnerabilities of GPTs, the risks associated with indirect prompt injection attacks, and how this new attack vector space is being likened as the “AI spin” on the age-old application security problem of malicious input and data leakage. We will conclude by introducing a novel technique on how to quantify the economical, legal, and societal quagmires that arise with the use and deployment of these commercial applications, in the hopes of aiding businesses with their return of investment/prioritization/resource allocations in tackling LLM security challenges.

Bring your favorite IDE, a python notebook to tinker in, and let’s go threat hunting!

Virtual
Event Contact
Myron Hedderson
Send Email
Thursday, February 20, 2025 (6:30 PM - 8:00 PM) (AST)

06:30 PM - 08:00 PM (AT)
Starts at: [5:30pm ET, 2:30pm PT]

Registered Guests
24
Powered By GrowthZone