Zero-shot learning: A machine learning approach where a model can make predictions about classes or concepts it has never seen during training by leveraging semantic relationships or descriptive information. In medicine, zero-shot learning enables models to interpret rare diseases, novel symptoms, or emerging health threats without needing explicit examples. For instance, a model trained on general clinical data might recognize an unfamiliar condition described in text or flagged in clinical notes, supporting decision-making in scenarios where annotated datasets are limited or unavailable.
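A minimal sketch of the idea, using NumPy: instead of matching an input against labeled training examples, the model compares the input's embedding to embeddings of class *descriptions*. The embeddings below are hypothetical stand-ins; a real system would obtain them from a pretrained text encoder.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical description embeddings for conditions never seen in training
class_embeddings = {
    "rare_disease_A": np.array([0.9, 0.1, 0.0]),
    "rare_disease_B": np.array([0.1, 0.9, 0.2]),
}

def zero_shot_classify(note_embedding):
    # Pick the class whose description embedding is most similar to the input
    return max(class_embeddings,
               key=lambda c: cosine(note_embedding, class_embeddings[c]))

note = np.array([0.85, 0.15, 0.05])  # embedding of an unseen clinical note
print(zero_shot_classify(note))      # the most similar description wins
```

No labeled examples of either disease are used; only their descriptions are needed at prediction time.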
Word cloud: A visual representation of the frequency of words in a given dataset, in which more frequent words are drawn larger. In medicine, word clouds are used to analyze and summarize large volumes of unstructured health data, such as patient feedback, medical literature, or clinical notes. They can help identify common symptoms in patient reports, highlight key topics in public health discussions, or provide insights into emerging trends in medical research.
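The core of any word cloud is a word-frequency table; rendering libraries (for example, the third-party `wordcloud` package) then scale font size by count. A small sketch of that frequency step, using made-up patient-feedback snippets:

```python
from collections import Counter
import re

# Hypothetical stand-ins for a batch of patient feedback notes
notes = [
    "severe headache and nausea",
    "mild headache, no nausea",
    "persistent headache after medication",
]

# Common filler words excluded from the cloud (illustrative list)
STOPWORDS = {"and", "no", "after"}

words = re.findall(r"[a-z]+", " ".join(notes).lower())
freqs = Counter(w for w in words if w not in STOPWORDS)

print(freqs.most_common(2))  # "headache" dominates this toy corpus
```

In a rendered cloud, "headache" would appear largest, immediately flagging it as the most commonly reported symptom.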
Violin plot: A plot that combines a box plot and a kernel density plot, illustrating data distribution and its probability density. In medicine, violin plots are useful for analyzing patient data distributions, such as comparing biomarker levels across different patient groups, visualizing the spread of treatment responses, or assessing variations in hospital stay durations. They help researchers and clinicians interpret complex data patterns more effectively than traditional box plots.
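A short sketch with matplotlib's built-in `violinplot`, comparing synthetic biomarker levels across two hypothetical patient groups (the data and group names are invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Synthetic biomarker levels for two patient groups
rng = np.random.default_rng(0)
group_a = rng.normal(5.0, 1.0, 200)   # e.g. controls
group_b = rng.normal(7.0, 1.5, 200)   # e.g. treated patients

fig, ax = plt.subplots()
parts = ax.violinplot([group_a, group_b], showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Group A", "Group B"])
ax.set_ylabel("Biomarker level")
fig.savefig("biomarker_violin.png")
```

Each "violin" widens where observations are dense, so a bimodal treatment response is visible at a glance where a box plot would show only quartiles.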
Variance: A type of error in machine learning that arises when a model is overly sensitive to small fluctuations in the training data, so that retraining on a slightly different sample yields noticeably different predictions. High variance is a hallmark of overfitting. Factors such as the size of the training data or the complexity of the algorithm influence variance.
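One way to see variance directly is to refit a rigid and a very flexible model on many noisy resamples of the same underlying data and measure how much their predictions at a fixed point fluctuate. A NumPy sketch (the data-generating function and noise level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)

def prediction_spread(degree, trials=200, x_new=0.5):
    # Std. dev. of the model's prediction at x_new across noisy resamples
    preds = []
    for _ in range(trials):
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, x.size)  # noisy data
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x_new))
    return float(np.std(preds))

low = prediction_spread(degree=1)   # rigid model: predictions barely move
high = prediction_spread(degree=9)  # flexible model: predictions swing widely
print(low, high)
```

The degree-9 polynomial chases the noise in each resample, so its predictions vary far more than the straight line's: that extra spread is variance.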
Validation dataset: A dataset used to assess a machine learning model's performance and generalizability. Internal validation involves using a reserved portion of the original dataset not seen during training. External validation uses data from a separate source, population, or institution, offering a more rigorous test of the model's ability to generalize to new, real-world scenarios, which is critical in healthcare applications for ensuring reliability across settings.
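A minimal internal-validation sketch in NumPy: hold out 20% of the data so the model is scored on examples it never saw during training. External validation would instead load a dataset from a different institution entirely. The features and labels here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)

X = rng.normal(size=(100, 5))   # stand-in patient features
y = rng.integers(0, 2, 100)     # stand-in outcome labels

# Shuffle indices, then reserve the first 20% as the hold-out validation set
idx = rng.permutation(len(X))
n_holdout = int(0.2 * len(X))
val_idx, train_idx = idx[:n_holdout], idx[n_holdout:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]

print(len(X_train), len(X_val))  # 80 training rows, 20 held out
```

The key property is that the two index sets are disjoint, so validation performance reflects generalization rather than memorization.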
Unsupervised learning: Machine learning in which the model finds patterns in data without labeled examples. Instead of being given explicit instructions, the model autonomously groups similar data points or detects hidden relationships through clustering, dimensionality reduction, or anomaly detection techniques. In medicine, unsupervised learning is used for patient segmentation, disease subtyping, anomaly detection in medical imaging, and drug discovery.
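A tiny k-means sketch, the classic unsupervised clustering algorithm: unlabeled "patients" are grouped into k segments by repeatedly assigning each point to its nearest centroid and recomputing the centroids. The two synthetic groups below stand in for, say, two patient phenotypes:

```python
import numpy as np

def kmeans(X, k, iters=20):
    centroids = X[:k].copy()  # simple deterministic initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Distance of every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic groups; no labels are ever provided
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(5, 0.5, (50, 2))])
labels, _ = kmeans(X, k=2)
```

Note that the algorithm recovers the two groups purely from the geometry of the data, which is exactly what patient segmentation relies on.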
Trust: The confidence that an AI system will perform reliably, ethically, and in alignment with user expectations and clinical standards. In healthcare, trust is critical for adoption and use; it is built through model transparency, rigorous validation, ethical governance, and ongoing human oversight in clinical decision-making.
Transparency: The degree to which the inner workings, decision-making processes, and limitations of an artificial intelligence system are understandable and accessible to users, developers, and other stakeholders. Transparency is important for ensuring that AI systems are fair and accountable.
Transformer: A deep learning architecture designed for processing sequential data, particularly in natural language processing (NLP). Transformers use self-attention mechanisms to weigh the importance of different parts of input data, allowing them to capture long-range dependencies and contextual meaning more effectively than traditional models like recurrent neural networks (RNNs). In medicine, transformer models are widely used for clinical text analysis, medical chatbot development, and biomedical research. Examples include models like BERT and GPT.
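The self-attention mechanism at the heart of a transformer can be sketched in a few lines of NumPy. The projection weights here are random stand-ins; real models learn them during training:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project each token into query, key, and value vectors
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Scaled dot-product: how relevant is each token to every other token
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Softmax over each row turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each output token is a weighted mix of all value vectors
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                 # e.g. 4 tokens of a clinical note
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # each token's new representation mixes the whole sequence
```

Because every token attends to every other token in one step, dependencies between distant words are captured without the sequential bottleneck of an RNN.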
Transfer learning: A machine learning technique in which a model developed for one task is reused or fine-tuned for a different but related task. This approach significantly reduces the amount of labeled data and training time needed for new applications. In medicine, transfer learning is commonly used in medical imaging, where models pre-trained on large image datasets (e.g., ImageNet) are adapted to detect diseases in radiology, pathology, or dermatology images with improved efficiency and accuracy.
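A minimal sketch of the pattern in NumPy: keep a "pretrained" feature extractor frozen and fit only a small head on the limited labeled target data. Here a fixed random projection stands in for real pretrained layers, and the small dataset is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for pretrained feature-extraction layers
W_pretrained = rng.normal(size=(20, 5))

def extract_features(X):
    return np.tanh(X @ W_pretrained)   # reused as-is, never retrained

# Small labeled target dataset (e.g. a few annotated medical images)
X_small = rng.normal(size=(30, 20))
y_small = (X_small[:, 0] > 0).astype(float)

# Train only the new head: a least-squares linear layer on frozen features
F = extract_features(X_small)
head, *_ = np.linalg.lstsq(F, y_small, rcond=None)

preds = (extract_features(X_small) @ head > 0.5).astype(float)
print((preds == y_small).mean())  # training accuracy of the small head
```

Only five head parameters are fitted here, which is why far less labeled data and compute are needed than training the whole model from scratch.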
All resources provided are owned and distributed by the creators indicated on each resource.