As Liner has gained traction with the market and customers as an AI search product, policy violations have begun to occur. We have observed an increase in inappropriate requests and abuse cases, areas that require internal policies and applied technology to prevent misuse and guide users toward correct usage.
Currently, Liner has built guardrail logic tailored to its Usage Policy using Prompt Guard, the OpenAI Moderation API, and Llama Guard 3. This allows Liner to design business logic that aligns with its safety policies and mitigates the cost issues that abuse creates.
The Liner team will continue its efforts in security certification, moderation modeling, and setting and applying policies in the product, going beyond credible AI to achieve reliable AI.