Artificial intelligence (AI) is everywhere right now, and everyone is talking about it. From having fun with generative imaging to staring in wonder at driverless cars, it seems that AI is popping up all over the place. Salesforce has made a ton of AI announcements with Sales GPT, Service GPT, Slack GPT, and beyond. As a Salesforce Admin, you’re probably asking a lot of questions about how these new AI products will change your Salesforce strategy, especially when it comes to security.

Security is one of the top (if not the most) important responsibilities of a Salesforce Admin, so it’s critical that you start thinking now about how to prepare for new security challenges with AI. The most successful Salesforce Admins are the ones who think ahead and plan for new features, so here’s my attempt to help you get ahead of these AI innovations and set your company and yourself up for success.

There are ways in which AI can be used to improve security, with prevention and automation, and ways in which AI can introduce net-new security risks, like producing inaccurate results and increasing the complexity of cybersecurity attacks. At Salesforce, our Ethics, Legal, and Security teams work closely together to shape our AI strategy, which is why we develop our products using these trusted AI principles: responsible, accountable, transparent, empowering, and inclusive. These principles might be helpful as you put together your own strategy.

I think of AI and security for Salesforce Admins (and all practitioners) in four buckets: user access and usage, Salesforce products, admin skills, and enablement.

The first thing all Salesforce Admins should do is develop a policy about users taking data from Salesforce and putting it into an external generative tool or database. If a user shares potentially sensitive corporate or customer information with a large language model (LLM), an attacker or competitor could access that information through prompts. So right now, I recommend partnering with your legal department to create a company policy so your users know how to protect company data. It might read something like, “Don’t put any data from our Salesforce system into an external AI tool.”

The next thing to think about is people using AI to input data into Salesforce or to create code for building Salesforce. If someone uses a generative AI tool to create data and then adds that into Salesforce, it could compromise your data quality. The same goes for using AI tools to generate code. There’s no guarantee that the generated code will be up to industry security standards, so you need to treat anything generated with external AI tools as untrusted.

Here’s the good news: Salesforce has a built-in trust layer for our AI products. When you’re building solutions with Sales, Service, Marketing, Commerce, Slack, or Tableau, the trust layer ensures your data is secure by protecting personally identifiable information (PII) and creating guardrails to prohibit things like publishing code directly to production. You can learn more about the latest AI innovations at Salesforce in this great keynote from London World Tour.

To set your company up for success with AI, now’s the time to clean and classify your data. That means figuring out which levels of data require protection and what AI will have access to. As Jason Ross, lead enterprise security engineer at Salesforce, says, “This can be a tedious process, but it’s critical to the security of the org.”

Admin skills

Now that you’ve got your security AI strategy in place for your company, it’s time to focus on your own personal strategy.
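To make the PII-protection idea concrete, here is a minimal sketch of redacting sensitive values from a record before any of it is shared with an external LLM. Everything here is an assumption for illustration: the field names, the regex patterns, and the `safe_prompt` helper are hypothetical, and a production trust layer does far more than simple pattern matching.

```python
import re

# Hypothetical PII patterns -- illustrative only, not a real Salesforce API.
# Real PII detection needs far more robust patterns and context awareness.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII value with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def safe_prompt(record: dict, question: str) -> str:
    """Build an LLM prompt from a record, masking PII in every field first."""
    masked = {key: redact_pii(str(value)) for key, value in record.items()}
    return f"{question}\n\nRecord: {masked}"

# Example with a made-up record: the email and phone number never
# leave the boundary in plain form.
record = {"Name": "Acme Corp", "Notes": "Contact jane.doe@acme.com or 555-123-4567"}
print(safe_prompt(record, "Summarize this account."))
```

The design point is that redaction happens at the boundary, before data leaves your system: whatever the prompt asks, the raw values are already gone by the time the external tool sees them.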