<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://hdl.handle.net/10453/148704">
    <title>OPUS Collection:</title>
    <link>http://hdl.handle.net/10453/148704</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194804" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194803" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194797" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194716" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-26T14:13:33Z</dc:date>
  </channel>
  <item rdf:about="http://hdl.handle.net/10453/194804">
    <title>Robust quantification of spectral transitions in perturbed quantum systems</title>
    <link>http://hdl.handle.net/10453/194804</link>
    <description>Title: Robust quantification of spectral transitions in perturbed quantum systems
Authors: Szabo, Z; Gehr, S; Facchi, P; Yuasa, K; Burgarth, D; Lonigro, D
Abstract: A quantum system subject to an external perturbation can experience leakage between uncoupled regions of its energy spectrum separated by a gap. To quantify this phenomenon, we present two complementary results. First, we establish time-independent bounds on the distances between the true dynamics and the dynamics generated by block-diagonal effective evolutions constructed via the Schrieffer-Wolff and Bloch methods. Second, we prove that, under suitable conditions, this leakage remains eternally small. That is, we derive a time-independent bound on the leakage itself, expressed in terms of the spectral gap of the unperturbed Hamiltonian and the norm of the perturbation, ensuring its validity for arbitrarily large times. Our approach only requires a finite spectral gap, thus accommodating continuous and unbounded spectra. Finally, we apply our bounds to specific systems of practical interest.</description>
    <dc:date>2025-09-02T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194803">
    <title>BiMark: Unbiased Multilayer Watermarking for Large Language Models</title>
    <link>http://hdl.handle.net/10453/194803</link>
    <description>Title: BiMark: Unbiased Multilayer Watermarking for Large Language Models
Authors: Feng, X; Zhang, H; Zhang, Y; Zhang, LY; Pan, S
Abstract: Recent advances in Large Language Models (LLMs) have raised urgent concerns about LLM-generated text authenticity, prompting regulatory demands for reliable identification mechanisms. Although watermarking offers a promising solution, existing approaches struggle to simultaneously achieve three critical requirements: text quality preservation, model-agnostic detection, and message embedding capacity, which are crucial for practical implementation. To achieve these goals, the key challenge lies in balancing the trade-off between text quality preservation and message embedding capacity. To address this challenge, we propose BiMark, a novel watermarking framework that achieves these requirements through three key innovations: (1) a bit-flip unbiased reweighting mechanism enabling model-agnostic detection, (2) a multilayer architecture enhancing detectability without compromising generation quality, and (3) an information encoding approach supporting multi-bit watermarking. Through theoretical analysis and extensive experiments, we validate that, compared to state-of-the-art multi-bit watermarking methods, BiMark achieves up to 30% higher extraction rates for short texts while maintaining text quality indicated by lower perplexity, and performs comparably to non-watermarked text on downstream tasks such as summarization and translation.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194797">
    <title>A Comprehensive Overview of Large Language Models</title>
    <link>http://hdl.handle.net/10453/194797</link>
    <description>Title: A Comprehensive Overview of Large Language Models
Authors: Naveed, H; Khan, AU; Qiu, S; Saqib, M; Anwar, S; Usman, M; Akhtar, N; Barnes, N; Mian, A
Abstract: Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. This success of LLMs has led to a large influx of research contributions in this direction. These works encompass diverse topics such as architectural innovations, better training strategies, context length improvements, fine-tuning, multimodal LLMs, robotics, datasets, benchmarking, efficiency, and more. With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction. Considering the rapidly emerging plethora of literature on LLMs, it is imperative that the research community is able to benefit from a concise yet comprehensive overview of the recent developments in this field. This article provides an overview of the literature on a broad range of LLM-related concepts. Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of research in LLMs. This review article is intended to provide not only a systematic survey but also a quick, comprehensive reference for the researchers and practitioners to draw insights from extensive, informative summaries of the existing works to advance the LLM research.</description>
    <dc:date>2025-08-19T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194716">
    <title>Half a Century of Fixed Point Theory Research in Thailand: A Bibliometric Analysis</title>
    <link>http://hdl.handle.net/10453/194716</link>
    <description>Title: Half a Century of Fixed Point Theory Research in Thailand: A Bibliometric Analysis
Authors: Saqlain, M; Merigo, JM; Kumam, P; Salisu, S</description>
    <dc:date>2025-09-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>