<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://hdl.handle.net/10453/35217">
    <title>OPUS Collection:</title>
    <link>http://hdl.handle.net/10453/35217</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194601" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194590" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194568" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194545" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-10T12:31:50Z</dc:date>
  </channel>
  <item rdf:about="http://hdl.handle.net/10453/194601">
    <title>Towards Accurate Inventory Verification in FTTP Networks: GNN-Based Framework for Physical-Logical Alignment</title>
    <link>http://hdl.handle.net/10453/194601</link>
    <description>Title: Towards Accurate Inventory Verification in FTTP Networks: GNN-Based Framework for Physical-Logical Alignment
Authors: Altaf, T; Liang, Y; Owen, R; Abolhasan, M; Liu, RP
Abstract: Accurate Physical Network Inventory (PNI), including fibre cables and passive components such as GPON splitters, is essential for efficient operation and fault management in Fibre-to-the-Premises (FTTP) networks. In practice, discrepancies between inventory records and deployed infrastructure frequently arise due to manual updates, delayed synchronization, and construction practices. This paper proposes a graph-based framework for detecting cable-level anomalies by integrating telemetry-derived optical distance measurements with PNI data. The network is modeled as a graph in which edges represent fibre segments annotated with physical and optical distances, and nodes represent passive network elements. A GNN model is trained for edge-level anomaly classification, enabling identification of physical–logical inconsistencies in the inventory. Experimental results demonstrate effective anomaly detection, highlighting the potential of the proposed approach as a scalable and extensible solution for automated inventory validation in GPON-based FTTP networks.</description>
    <dc:date>2026-02-05T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194590">
    <title>Global, regional and national burden of ischemic heart disease attributable to suboptimal diet, 1990–2023: a Global Burden of Disease study</title>
    <link>http://hdl.handle.net/10453/194590</link>
    <description>Title: Global, regional and national burden of ischemic heart disease attributable to suboptimal diet, 1990–2023: a Global Burden of Disease study
Authors: GBD 2023 IHD &amp; Dietary Risk Factors Collaborators; Sun, J
Abstract: Ischemic heart disease (IHD) remains a leading cause of death worldwide, with dietary risks being its most significant modifiable factor. Here, using the Global Burden of Diseases, Injuries and Risk Factors Study 2023, we estimated the mortality and disability-adjusted life years from diet-related IHD across 204 countries. In 2023, a suboptimal diet was responsible for 4.06 million (95% uncertainty interval (UI) 0.74–6.22) IHD deaths and 96.84 million (18.82–142.52) IHD disability-adjusted life years. The global age-standardized death rate of IHD attributable to suboptimal diet decreased by 43.92% (95% UI 34.44–53.23) per 100,000 population from 1990 to 2023. Among dietary factors, low intake of nuts and seeds (9.87, 95% UI 2.84–17.12 deaths per 100,000 population), low whole grains (9.22, 4.73–13.67), low fruits (7.25, 1.54–13.34) and high sodium (7.15, 0.92–17.97) were primary contributors to IHD deaths. The burden was particularly pronounced in low- and middle-sociodemographic index countries. By disentangling dietary risk factors, we identified the portion of IHD burden directly modifiable through food interventions.</description>
    <dc:date>2026-03-30T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194568">
    <title>Exploring the Feature Extraction and Relation Modeling For Light Weight Transformer Tracking</title>
    <link>http://hdl.handle.net/10453/194568</link>
    <description>Title: Exploring the Feature Extraction and Relation Modeling For Light Weight Transformer Tracking
Authors: Zheng, J; Liang, M; Huang, S; Ning, J
Abstract: Recent advancements in transformer-based lightweight object tracking have set new standards across various benchmarks due to their efficiency and effectiveness. Despite these achievements, most current trackers rely heavily on pre-existing object detection architectures without optimizing the backbone network to leverage the unique demands of object tracking. Addressing this gap, we introduce the Feature Extraction and Relation Modeling Tracker (FERMT), a novel approach that significantly enhances tracking speed and accuracy. At the heart of FERMT is a strategic decomposition of the conventional attention mechanism into four distinct sub-modules within a one-stream tracker. This design stems from our insight that the initial layers of a tracking network should prioritize feature extraction, whereas the deeper layers should focus on relation modeling between objects. Consequently, we propose an innovative, lightweight backbone specifically tailored for object tracking. Our approach is validated through meticulous ablation studies, confirming the effectiveness of our architectural decisions. Furthermore, FERMT incorporates a Dual Attention Unit for feature pre-processing, which facilitates global feature interaction across channels and enriches feature representation with attention cues. Benchmarking on GOT-10k, FERMT achieves a groundbreaking Average Overlap (AO) score of 69.6%, outperforming the leading real-time trackers by 5.6% in accuracy while boasting a 54% improvement in CPU tracking speed. This work not only sets a new standard for state-of-the-art (SOTA) performance in light-weight tracking but also bridges the efficiency gap between fast and high-performance trackers. The code and models are available at https://github.com/KarlesZheng/FERMT.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194545">
    <title>When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge?</title>
    <link>http://hdl.handle.net/10453/194545</link>
    <description>Title: When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge?
Authors: Wang, S; Zhu, T; Ye, D; Zhou, W
Abstract: The deployment of large language models (LLMs) like ChatGPT and Gemini has shown their powerful natural language generation capabilities. However, these models can inadvertently learn and retain sensitive information and harmful content during training, raising significant ethical and legal concerns. To address these issues, machine unlearning has been introduced as a potential solution. While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting. To address these limitations, we propose a lightweight behavioral unlearning framework based on Retrieval-Augmented Generation (RAG) technology. By modifying the external knowledge base of RAG, we simulate the effects of forgetting without directly interacting with the unlearned LLM. We approach the construction of unlearned knowledge as a constrained optimization problem, deriving two key components that underpin the effectiveness of RAG-based unlearning. This RAG-based approach is particularly effective for closed-source LLMs, where existing unlearning methods often fail. We evaluate our framework through extensive experiments on both open-source and closed-source models, including ChatGPT, Gemini, Llama-2-7b-chat, and PaLM 2. The results demonstrate that our approach meets five key unlearning criteria: effectiveness, universality, harmlessness, simplicity, and robustness. Meanwhile, this approach can extend to multimodal large language models and LLM-based agents.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

