Part 1: Platform Terminology
Authenticx Term | Definition
Human Validated | Interactions between customers and call agents listened to by Authenticx Insights Analysts.
Topic Prevalence | The percentage of calls in which a specific topic (e.g., Copay, BV...) is mentioned.
Inbound Calls | The number of calls going into the Hub (call center). |
Outbound Calls | The number of calls coming out of the Hub (call center).
Quality Scores | The evaluation of the call agent based on client specific quality skills. These are customizable but typically include standard skills such as Takes Accountability, Manages Expectations, and Proactively Listens. |
Resolution Rate | The rate at which calls are fully resolved, or a clear resolution is offered, by the end of the call.
Classifier | A set of rules or criteria that flag as True or False on a conversation. You may occasionally see the outdated acronym “RBC,” which stands for Rule-Based Classifier.
Preventable Call Time | The amount of time added to calls due to avoidable Eddy Effects taking place.
Escalation Volume | The number of calls that “escalate” to someone in a different department or in a Managerial role. |
Interaction Date | Date the interaction took place between the caller and the agent. |
Arrival Date | Date the interaction arrived in Authenticx. |
Evaluation Start Date | Date the evaluation started. |
Evaluation Complete Date | Date the evaluation was completed. You may also see this listed as “Submitted Date.”
Conversation | A single interaction that is brought into the Authenticx platform. This could be an audio call, an email conversation, a chat conversation, or even a text conversation. |
Analyst | An individual who performs Evaluations in the Authenticx platform, answering various module questions related to an interaction. An analyst can be a team member of the client organization, an Authenticx team member, or even the Authenticx AI. |
Evaluations | Forms of data, attached to an interaction, that are completed by a human analyst, a Machine Learning tool, or a combination of both. Evaluations consist of modules where questions about the interaction are answered. |
Module(s) | Individual forms made up of questions that are to be answered by a human analyst, machine learning, or metadata. One or several modules may make up an evaluation form. |
Audio Clip | An analyst-created segment of audio from an audio-based interaction. |
Metadata | Data about data, or associated information about the content of a file. In the world of conversational intelligence, metadata usually consists of the fields that tell a user the who, where, and when of the interaction. It could be agent name, interaction date, a team identifier, or even social identifiers of the customer.
Evaluation (Eval) Number | Unique number given to every evaluation in Authenticx. |
Authenticx Hierarchy | A structure that provides organization to interaction and evaluation data for a tenant. |
Agents | Members of a customer organization who have conversations with the patients, members, and/or customers of that organization. In Authenticx, agents are associated with an interaction.
Users | Individuals created to access the Authenticx platform (either through a traditional username/password or via SSO with their company credentials). Their platform access is controlled through their given role.
Agent Users | A special type of user who is connected to their agent record and whose access is restricted to show only the agent's domain and their own results.
Sampling | The act of using specific criteria (rulesets) to identify a subset (sample) of interactions from the total calls the platform has received and evaluating them (by attaching an evaluation to an interaction). |
The Sample Set | The subset of interactions identified based upon the ruleset utilized (ex: call length, call direction, classifier = True/False, etc.). |
Ruleset | The defined criteria that is being applied to the total calls the platform has received in order to identify the sample. |
Machine Learning Model Identification | A type of classifier created from Machine Learning. These models predict the presence or absence of a topic in a single interaction, based on a complex algorithm with many variables developed through pattern recognition over many thousands of tagged interactions. Machine Learning Models are best at identifying broad topics that cut across industries and have several thousand data points from which to construct their algorithm.
SmartPredict | Answering a module question with a Classifier, Machine Learning Model, or piece of metadata. The question uses these forms of Artificial Intelligence to predict the correct answer. SmartPredict™ can be applied to all evaluations.
Agreement Score | A measure of how closely the AI's judgement matches the human analyst's; this is what people often (unintentionally) mean by a classifier's “accuracy.” Derived by: (total # of answers − total # of AI-driven answers that were changed by a human) / total # of answers.
SFTP (SSH File Transfer Protocol) | A secure file transfer method. Authenticx sets up a server between themselves and the organization to receive files and data. This is like a locker where Authenticx and the organization are the only ones who have the combination. |
API (Application Programming Interface) | A direct connection to an organization's telephony solution such as NICE CXONE or Five9. Organizations should connect with an Authenticx team member to discuss connection details and feasibility. |
User Management | Helps ensure users have access to all their expected data and provides security (in the form of role provisioning and hierarchy setup). The primary methodology for this is through User roles and Hierarchy Assignments. |
User Roles | Grant functionality to areas and parts of Authenticx and inform what users can do within those areas (example: an analyst who can access the Montage Library and MyEvaluations, but not Reports).
Hierarchy Assignments | Determine a user's scope of view to information contained within evaluations and interactions (example: Director Alpha can see call center 1 and 2, but Manager Bravo can only see call center 1). Scope can be constrained by Hierarchy member, view/edit/manage capabilities, and/or interaction media type. |
Extended Metadata | Contextual data attached to an interaction file, sourced from an external system, that goes beyond the standard interaction metadata.
Automated Quality Management | Reports created by healthcare-specific AI models that automatically identify and flag instances of HIPAA issues, compliance issues, product quality complaints, adverse events, and more. These reports also surface areas of weakness and strength across agents, teams, brands, call lines, etc., to fine-tune agent training. Furthermore, they empower managers with automated call evaluations so they can spend their time on high-touch 1:1 coaching to reduce churn and improve customer outcomes.
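The Agreement Score definition above reduces to a one-line formula. A minimal sketch of that calculation, using hypothetical counts (this is not Authenticx's implementation):

```python
def agreement_score(total_answers: int, changed_by_human: int) -> float:
    """Agreement Score = (total answers - AI answers changed by a human) / total answers."""
    if total_answers <= 0:
        raise ValueError("total_answers must be positive")
    return (total_answers - changed_by_human) / total_answers

# Hypothetical example: 200 SmartPredict answers, 14 corrected by analysts
score = agreement_score(200, 14)
print(round(score, 3))  # 0.93
```

Note that an answer a human reviewed but did not change counts toward agreement, which is why this is a measure of human/AI agreement rather than accuracy against ground truth.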
Part 2: AI Model Types
AI | Definition + Application
Generative AI (GenAI) | Definition: AI that produces new content based on input data, generating text, images, or other media by learning patterns from large datasets. Application: Used for creating original text, images, audio, or video by learning from and reacting to existing data patterns. |
Large Language Models (LLM) | Definition: AI models trained on vast datasets to understand and generate human-like text by predicting the next word or phrase based on context. Application: Ideal for tasks involving text comprehension, summarization, translation, question answering, and content generation. |
Natural Language Understanding (NLU) | Definition: A branch of AI that focuses on interpreting the meaning, intent, and context behind text or speech, beyond simple pattern matching. |
Deep Learning (DL) | Definition: This model type trains on layered datasets to develop "neural pathway"-like capabilities, allowing it to analyze and understand content similarly to a human analyst. Application: Effective for complex data analysis, especially when interpreting intricate patterns or making detailed predictions. |
Machine Learning (ML) | Definition: A type of AI where models are trained on large datasets to make predictions or decisions without being explicitly programmed, improving over time as they are exposed to more data. |
Part 3: Authenticx AI Models
Authenticx Model | AI Type | Description |
Redaction | Deep Learning | Automatically remove select personally identifiable information from conversations. |
Eddy Effect Signals | Deep Learning | Surfaces when a desired or expected customer experience is disrupted by an obstacle, along with identifying where in the interaction the friction point was experienced. |
Safety Event Identification | Deep Learning | A family of models that flags Safety Events (product quality complaints, adverse events, and special situations), along with identifying where in the interaction the Safety Event was flagged. |
Safety Event Acknowledgement | Deep Learning | Determines if the agent talking with the patient appropriately addressed the Safety Event, indicating they will report it properly. |
HIPAA Compliance | Deep Learning | Flags whether the agent confirmed the customer’s name and 2 pieces of PII before disclosing HIPAA-sensitive information.
Conversation Topics | LLM/GenAI | Surfaces 1-3 topics of conversation for every interaction with a customer, including a description of the topic. |
Conversation Summary | LLM/GenAI | Generates a 3-4 sentence summary of every conversation through any channel with 95%+ accuracy, trained for healthcare. |
Agent Coaching Notes | LLM/GenAI | Generates coaching notes for an agent on a given conversation and surfaces recommendations for improvement. |
Starting & Ending Sentiment | Deep Learning | Labels the starting and ending sentiments of a conversation as Positive, Neutral, or Negative, by leveraging Natural Language Understanding and Data Labeling. |
Contact Type | Deep Learning | Automatically identifies the primary persona present on every interaction as one of the following 6 contact types: Caregiver, Patient/Member, HCP, Payer, Internal, or Pharmacy. |
Voicemail | Deep Learning | Detects interactions that are solely voicemail responses, optionally filtering these conversations out to focus on high-value calls.
IVR | Deep Learning | Detects interactions that are solely interactive voice response (automated phone system tool used by call centers), and filters these conversations out. |
Part 4: Additional AI Terminology
Term | Definition
Data Labeling | The process of providing structure to data by creating labels that models can learn to apply, enabling the identification of specific information within the data. |
Drift | Refers to an unacceptable level of inaccuracy in the results provided by a model when compared to human scoring. Addressing drift is crucial for maintaining model reliability. |
Inter-Rater Reliability (IRR) | A measure of consistency among human analysts' responses, indicating how much discrepancy exists between their evaluations. High IRR is required to build an effective AI model that can reliably match human judgement. |
Confusion Matrix | A tool used to determine a model's accuracy by comparing its outputs to human responses, allowing for a clear understanding of where the model performs well and where improvements are needed. |
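To make the confusion matrix concrete, here is a minimal sketch that tallies a True/False classifier's outputs against human answers. All labels and counts are hypothetical, and the function name is illustrative:

```python
from collections import Counter

def confusion_matrix(human_answers, model_answers):
    """Count (human, model) label pairs for a True/False classifier."""
    counts = Counter(zip(human_answers, model_answers))
    return {
        "TP": counts[(True, True)],    # model and human both flagged the topic
        "FP": counts[(False, True)],   # model flagged it; the human did not
        "FN": counts[(True, False)],   # human flagged it; the model missed it
        "TN": counts[(False, False)],  # both agreed the topic was absent
    }

human = [True, True, False, False, True, False]
model = [True, False, False, True, True, False]
print(confusion_matrix(human, model))  # {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 2}
```

The off-diagonal cells (FP and FN) are where the model disagrees with human scoring, so a growing FP or FN count over time is one way drift, as defined above, shows up in practice.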