<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://hdl.handle.net/10453/148704">
    <title>OPUS Collection:</title>
    <link>http://hdl.handle.net/10453/148704</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194998" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194937" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194933" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194910" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-17T04:21:41Z</dc:date>
  </channel>
  <item rdf:about="http://hdl.handle.net/10453/194998">
    <title>Enhanced continuous-variable quantum key distribution protocol via adaptive signal processing</title>
    <link>http://hdl.handle.net/10453/194998</link>
    <description>Title: Enhanced continuous-variable quantum key distribution protocol via adaptive signal processing
Authors: Erkılıç, Ö; Shajilal, B; Conlon, LO; Walsh, A; Das, A; Kish, S; Symul, T; Lam, PK; Assad, SM; Zhao, J
Abstract: Quantum key distribution (QKD) provides secure communication using quantum mechanics, with continuous-variable QKD (CV-QKD) being an attractive solution due to its compatibility with existing telecommunication technology. Its main drawback is susceptibility to signal loss in fibre and free-space links, including satellite links, which limits performance. Here we present a software-based protocol that enhances CV-QKD by applying adaptive filters at the transmitter and receiver, allowing the system to respond dynamically to changing channel conditions. Our security analysis avoids relying on Gaussian extremality, giving accurate bounds on an eavesdropper’s information. The protocol can also extract keys in regions that would normally be considered insecure. We demonstrate a threefold increase in secret-key rates compared with the best existing CV-QKD protocol and, in satellite simulations, up to a 400-fold improvement. Because it requires no hardware modifications, our method can be readily integrated into existing systems, paving the way for more practical and robust quantum communication networks.</description>
    <dc:date>2025-12-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194937">
    <title>Holevo Cramér-Rao bound: How close can we get without entangling measurements?</title>
    <link>http://hdl.handle.net/10453/194937</link>
    <description>Title: Holevo Cramér-Rao bound: How close can we get without entangling measurements?
Authors: Das, A; Conlon, LO; Suzuki, J; Yung, SK; Lam, PK; Assad, SM
Abstract: In multi-parameter quantum metrology, the resource of entanglement can increase the efficiency of the estimation process. Entanglement can be used in the state-preparation stage, the measurement stage, or both, to harness this advantage; here we focus on the role of entangling measurements. Specifically, entangling or collective measurements over multiple identical copies of a probe state are known to be superior to measuring each probe individually, but the extent of this improvement is an open problem. It is also known that such entangling measurements, though resource-intensive, are required to attain the ultimate limits in multi-parameter quantum metrology and quantum information processing tasks. In this work we investigate the maximum precision improvement that collective quantum measurements can offer over individual measurements, which we call the ‘collective quantum enhancement’. We show that, whereas the maximum enhancement can, in principle, be a factor of n for estimating n parameters, this bound is not tight for large n. Instead, our results prove that an enhancement linear in the dimension of the qudit probe is possible using collective measurements, leading us to conjecture that this is the maximum collective quantum enhancement in any local estimation scenario.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194933">
    <title>Half a century of Instructional Science: a bibliometric analysis</title>
    <link>http://hdl.handle.net/10453/194933</link>
    <description>Title: Half a century of Instructional Science: a bibliometric analysis
Authors: Ruiz-Morales, B; Alfaro-García, VG; Merigó, JM; Atif, A; Kyza, EA</description>
    <dc:date>2026-12-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194910">
    <title>VCP-CLIP: A Visual Context Prompting Model for Zero-Shot Anomaly Segmentation</title>
    <link>http://hdl.handle.net/10453/194910</link>
    <description>Title: VCP-CLIP: A Visual Context Prompting Model for Zero-Shot Anomaly Segmentation
Authors: Qu, Z; Tao, X; Prasad, M; Shen, F; Zhang, Z; Gong, X; Ding, G
Editors: Leonardis, A; Ricci, E; Roth, S; Russakovsky, O; Sattler, T; Varol, G
Abstract: Recently, large-scale vision-language models such as CLIP have demonstrated immense potential in the zero-shot anomaly segmentation (ZSAS) task, using a unified model to directly detect anomalies on any unseen product with painstakingly crafted text prompts. However, existing methods often assume that the product category to be inspected is known, and therefore set product-specific text prompts, which is difficult to achieve in data-privacy scenarios. Moreover, even products of the same type exhibit significant differences due to specific components and variations in the production process, posing significant challenges to the design of text prompts. To this end, we propose a visual context prompting model (VCP-CLIP) for the ZSAS task based on CLIP. The insight behind VCP-CLIP is to employ visual context prompting to activate CLIP’s anomalous semantic perception ability. Specifically, we first design a Pre-VCP module to embed global visual information into the text prompt, eliminating the need for product-specific prompts. We then propose a novel Post-VCP module that adjusts the text embeddings using the fine-grained features of the images. In extensive experiments on 10 real-world industrial anomaly segmentation datasets, VCP-CLIP achieves state-of-the-art performance on the ZSAS task. The code is available at https://github.com/xiaozhen228/VCP-CLIP.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>