<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://hdl.handle.net/10453/148704">
    <title>OPUS Collection:</title>
    <link>http://hdl.handle.net/10453/148704</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194585" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194580" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194548" />
        <rdf:li rdf:resource="http://hdl.handle.net/10453/194493" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-05T22:55:25Z</dc:date>
  </channel>
  <item rdf:about="http://hdl.handle.net/10453/194585">
    <title>Decoding Data: a Complete Guide to Business Intelligence</title>
    <link>http://hdl.handle.net/10453/194585</link>
    <description>Title: Decoding Data: a Complete Guide to Business Intelligence
Authors: Atif, A; Qureshi, MA; Jha, B; Mwagwabi, F; Papini, M
Editors: Atif, A</description>
    <dc:date>2026-03-02T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194580">
    <title>Scaffold or shortcut? Postgraduate IT students’ use of generative AI and self-regulated learning</title>
    <link>http://hdl.handle.net/10453/194580</link>
    <description>Title: Scaffold or shortcut? Postgraduate IT students’ use of generative AI and self-regulated learning
Authors: Atif, A; Dickson-Deane, C
Abstract: Generative artificial intelligence (GenAI) tools such as ChatGPT and Copilot are increasingly integrated into higher education, where students use them to summarise texts, solve problems, and generate code. While these tools can reduce cognitive load and improve learning efficiency, they may also challenge students’ ability to regulate their learning (i.e., self-regulated learning; SRL) by encouraging surface-level engagement and overdependence. This study investigates how GenAI shapes SRL behaviours within a postgraduate information technology (IT) subject/unit/course. A mixed-methods design was employed with 267 students, combining pre- and post-semester surveys with semi-structured interviews. The study examined how students engaged with GenAI and how this affected the SRL components of goal setting, monitoring, and self-evaluation. Findings show varied patterns: some students used GenAI to clarify goals, check understanding, and reflect on progress, while others relied on it as a shortcut, outsourcing monitoring and evaluation. The study highlights GenAI’s dual role as a scaffold and a shortcut, offering insights for designing learning environments that foster productive use and sustain student agency and autonomy.</description>
    <dc:date>2026-03-31T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194548">
    <title>Cobweb Privacy: a Novel Mechanism for Comprehensive Association Privacy Protection in Data Aggregation</title>
    <link>http://hdl.handle.net/10453/194548</link>
    <description>Title: Cobweb Privacy: a Novel Mechanism for Comprehensive Association Privacy Protection in Data Aggregation
Authors: Li, Y; Xu, L; Li, J; Fang, H; Yu, S
Abstract: The issue of association privacy leakage has become increasingly critical during data release and usage. However, traditional privacy protection techniques often struggle to address privacy leakage resulting from implicit associations within the data. In this paper, we propose a novel mechanism based on Cobweb Privacy to safeguard association privacy more comprehensively. Firstly, we design the concept of ϵ-Cobweb Privacy (ϵ-CP) specifically to address association privacy leakage. This concept extends the traditional notion of differential privacy by incorporating associated prior knowledge, thereby offering more effective and comprehensive protection of association privacy. We further demonstrate its privacy guarantees through a theoretical analysis of the relationship between ϵ-CP, differential privacy, and pufferfish privacy. Secondly, we quantify the privacy leakage problem mathematically and examine the utility-privacy trade-off under various priors. Additionally, we present a universal framework for association privacy protection in data aggregation scenarios using the ϵ-CP mechanism. Finally, this framework is integrated with three different noise addition methods and compared against mechanisms based on differential privacy and pufferfish privacy, and its utility is validated through experiments on both non-temporal and temporal real-world datasets. The results show that ϵ-CP provides distinct advantages in the utility-privacy trade-off.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10453/194493">
    <title>A Robustness Verification Tool for Quantum Machine Learning Models</title>
    <link>http://hdl.handle.net/10453/194493</link>
    <description>Title: A Robustness Verification Tool for Quantum Machine Learning Models
Authors: Lin, Y; Guan, J; Fang, W; Ying, M; Su, Z
Abstract: Adversarial noise attacks present a significant threat to quantum machine learning (QML) models, similar to their classical counterparts. This is especially true in the current Noisy Intermediate-Scale Quantum era, where noise is unavoidable. Therefore, it is essential to ensure the robustness of QML models before their deployment. To address this challenge, we introduce VeriQR, to the best of our knowledge the first tool designed specifically for formally verifying and improving the robustness of QML models. This tool mimics the noisy effects of real-world quantum hardware by incorporating random noise to formally validate a QML model’s robustness. VeriQR supports exact (sound and complete) algorithms for both local and global robustness verification. For enhanced efficiency, it implements an under-approximate (complete) algorithm and a tensor network-based algorithm to verify local and global robustness, respectively. As a formal verification tool, VeriQR can detect adversarial examples and utilize them for further analysis and to enhance local robustness through adversarial training, as demonstrated by experiments on real-world quantum machine learning models. Moreover, it permits users to incorporate customized noise. Based on this feature, we assess VeriQR using various real-world examples, and experimental outcomes confirm that the addition of specific quantum noise can enhance the global robustness of QML models. These processes are made accessible through a user-friendly graphical interface provided by VeriQR, catering to general users without requiring a deep understanding of the counter-intuitive probabilistic nature of quantum computing.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>