Improving End-to-End Models for Form Understanding with Synthetic Ground Truth Pairs
Methods for improving end-to-end document understanding models by leveraging synthetic ground truth pairs of commonly filled form data.
Our published research. We focus on reproducible work with open code and data.
Benchmarking LLMs (BERT, RoBERTa, GPT-2, GPT-Neo) for security log analysis. Introduces the LLM4Sec pipeline, achieving a 0.998 F1-score.
Exploring LLM embeddings for distinguishing behaviors in log files via unsupervised learning for anomaly detection.
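The core idea can be illustrated with a minimal, generic sketch: embed each log line, then flag lines whose embedding lies unusually far from the centroid of the data. The toy vectors and the distance threshold below are illustrative assumptions, not the paper's actual embeddings or method.

```python
import math

def centroid(vectors):
    # Element-wise mean of a list of equal-length vectors.
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_anomalies(embeddings, factor=1.5):
    # Flag vectors whose distance to the centroid exceeds
    # `factor` times the mean distance — a crude unsupervised
    # outlier rule standing in for a real detector.
    c = centroid(embeddings)
    dists = [euclidean(v, c) for v in embeddings]
    mean_d = sum(dists) / len(dists)
    return [d > factor * mean_d for d in dists]

# Toy "embeddings": three similar log lines and one outlier.
vecs = [[0.1, 0.2], [0.12, 0.19], [0.11, 0.21], [5.0, 5.0]]
print(flag_anomalies(vecs))  # only the last vector is flagged
```

In practice the vectors would come from an LLM encoder rather than being hand-written, and a clustering or density-based detector would replace the centroid-distance rule.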
Comparing semantic and syntactic feature extraction approaches for unsupervised anomaly detection in application logs.
Analyzing lightweight syntactical feature extraction techniques from information retrieval for log abstraction in security.
Novel entropy-based metric to quantify model-dataset-complexity relationships and analyze environmental impact of CV methods.
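Entropy-based metrics of this kind typically build on Shannon entropy over a distribution (e.g. of class labels). The snippet below shows that generic building block only; it is not the paper's specific model-dataset-complexity metric.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    # Shannon entropy (in bits) of a label distribution; higher
    # values indicate a more uniform, hence more complex, distribution.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy(["cat", "cat", "dog", "dog"]))  # 1.0 (maximally mixed)
print(shannon_entropy(["cat"] * 4))                   # 0 (single class)
```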
Addressing the climate impact of the Computer Vision community and proposing methods to limit its environmental footprint.
Neural architecture optimization focusing on channel configurations for improved efficiency in convolutional networks.
Interested in research collaboration? We're always looking to work with researchers who share our commitment to open science.
Get In Touch