Matches in Ghent University Academic Bibliography for { ?s <http://schema.org/abstract> ?o. }
- 01GMVB15VWH99DMW9YDHHGJR75 abstract "Hydrogen gas has gathered much interest as an energy carrier for present and future clean energy needs. It will play an important part in the storage and transport of energy produced by renewable sources. One economically justifiable method for the transport of hydrogen gas is the use of existing pipeline networks. However, the uptake of hydrogen in steel structures is known to cause a degradation of mechanical properties, potentially decreasing toughness and inducing cracks. This phenomenon is called hydrogen embrittlement (HE). Since the existing pipeline systems were not designed with high-pressure hydrogen gas transport in mind, their suitability needs to be investigated. In this study, the hydrogen properties of two grades of pipeline steel, a 'vintage' API 5L X56 and a 'modern' API 5L X70, are investigated. The influence of hydrogen on their mechanical behavior is evaluated by tensile tests on uncharged and hydrogen-charged specimens. These tests are performed on notched round bar specimens with different notch radii, allowing for a range of positive stress triaxialities to be examined. Hydrogen charging is done electrochemically and two different hydrogen levels are used: both steels are tested at the same hydrogen concentration and, for the X70 steel, a higher hydrogen concentration is considered as well. Hydrogen signatures in the form of fisheyes and quasi-cleavage are detected on the fracture surface. The influence of different stress triaxialities on fisheye formation is investigated, as well as the role of inclusions or other microstructural features as initiation points for fisheyes. Based on the notched specimens' reduction in area, the vintage X56 steel appears slightly less or similarly sensitive to HE compared to the higher strength, modern X70 steel for similar hydrogen levels. In the X70 steel, a higher hydrogen concentration results in more severe embrittlement as well as more fisheyes on the fracture surface.".
- 01GMVB15W0D8Q5CW1GE42FVTRG abstract "The goal of the circular building strategy is to have a lower environmental and financial impact than the traditional linear way of building. To prove this is in fact the case, it is necessary to be able to calculate the environmental and financial impact of circular versus linear building elements. The environmental impact of buildings, building elements and materials can be determined through Life Cycle Assessment (LCA) and the financial impact can be calculated through Life Cycle Costing (LCC). There are established standards in the building sector on how to execute LCA and LCC studies. However, currently many LCA and LCC studies based on these standards are not set up to evaluate circularity. On the one hand, circular principles are not evaluated. On the other hand, usually only one life cycle scenario and fixed parameter values are considered, while it is very difficult to predict what will happen over the long lifespan of a building element. This is especially true for circular building elements, which enable more scenarios (adaptation, reuse, recycling, …) than linear ones. The goal of my PhD research is to develop an LCA and LCC method that can evaluate the environmental and financial impact of circular (versus linear) building elements. The focus lies on incorporating variable parameters and including multiple scenarios throughout the lifespan of a building element. An important research question is which of the parameters and scenarios are the most decisive for the environmental and financial impact. These parameters will be important boundary conditions to ensure circular building is environmentally and financially more beneficial than linear building.".
- 01GMVB1ARJN9PP3C5VTZXKRB2N abstract "Electric actuation is used in a wide variety of applications and fields. One of these fields is prosthesis actuation, where energy efficiency and mass are important characteristics. Many applications within this field (e.g. knee prostheses, ankle prostheses, ...) require a high torque at low speed and a low torque at high speed. These motion requirements are difficult to combine effectively in a single electric motor. One possibility is to use an oversized direct-drive motor that can deliver the peak torque. However, this results in a heavy actuation system and low overall efficiency. A common practice is to combine a smaller electric motor with a gear reduction system. In the literature, novel redundant electric actuation systems have been presented with or without locking mechanisms. In this work, multiple electric actuation systems composed of electric motors, gear reduction systems and locking mechanisms are compared to each other. This is done using the gait cycle of a human ankle as a case study.".
- 01GMVB1ARQ7FP9RAGF3FS242ZZ abstract "This paper proposes an algorithm that detects skiers and flags during a ski slalom race, using object detection on video footage. The algorithm uses the detection of skiers and flags to implement gate-to-gate timing and analysis. The result of the detection and gate timing is used to analyze metrics such as the total time of the skier's race, the time between every two consecutive flags and anomalies in the skier's trajectory. Furthermore, the data of all skiers competing in the slalom race can be used to create a heatmap of the trajectory and to calculate average statistics. To detect the objects, a custom-trained YOLO model is used. Training is done on a custom-created dataset, containing images from various slalom races together with footage recorded on an indoor ski slope. Compared to the current equipment for gate timing using infrared photocells, video-based timing is less accurate, as the algorithm is limited on the one hand by the frames per second of the video and on the other hand by the accuracy of the object detection. However, for recreational and training purposes, it delivers accuracy within a reasonable margin and can provide quick feedback.".
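Editorial note on the record above: as a purely illustrative sketch of the frame-rate limitation it mentions (the function, frame indices and fps below are hypothetical, not taken from the cited work), gate-to-gate split times derived from detection frames could look like:

```python
# Illustrative sketch only (not code from the cited work): gate-to-gate split
# times computed from the video frame indices at which gate crossings were
# detected. Timing resolution is bounded by the frame interval (1/fps seconds).
def gate_to_gate_times(crossing_frames, fps):
    """crossing_frames: frame index of each detected gate crossing, in order."""
    return [(b - a) / fps for a, b in zip(crossing_frames, crossing_frames[1:])]

# Hypothetical detections at 30 fps:
frames = [12, 96, 170, 255]
splits = gate_to_gate_times(frames, fps=30.0)
print(splits)       # per-gate split times in seconds
print(sum(splits))  # total time over the gated section
```

At 30 fps the best-case timing error per crossing is on the order of one frame interval (about 33 ms), which is why photocell timing remains more accurate.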
- 01GMVB1ARTTMZX7ZV3THBBQCN9 abstract "Data visualization recommendation aims to assist the user in creating visualizations from a given dataset. The process of creating appropriate visualizations requires expert knowledge of the available data model as well as the dashboard application that is used. To relieve the user from requiring this knowledge and from the manual process needed to create numerous visualizations or dashboards, we present a context-aware visualization recommender system that automatically recommends a personalized dashboard to the user, based on the system they are monitoring and the task they are trying to achieve. Through a knowledge graph-based approach, expert knowledge about the data and the application is included to improve the recommendation process. Preliminary results show a promising performance of the presented recommender system, validating its ability to assist the end-user in visualizing the most relevant information and reduce the time required to manually create dashboards.".
- 01GMVB1FN91X8PXZAEAB2N3SDF abstract "Design Space Exploration is an important concept in engineering design in which the design space is explored for suitable design candidates. Most of the time there are design specifications which limit the design space to a particular area of interest called the feasible region. To identify the feasible region(s), time-consuming simulations are run that characterize the design space and its constraints. A cheaper and faster alternative to the expensive simulations is to use surrogate models, which are data-efficient machine learning models. The training samples for the model can be chosen either without using any knowledge of the underlying design problem, which can lead to many simulations that are of no use to the engineer because the corresponding designs are infeasible, or by using active learning. Active learning is an iterative process in which the samples are chosen in an intelligent way, so that the required performance of the model is achieved with as few samples as possible. In addition, active learning can focus on simulating designs that are of interest for finding the feasible region. However, in the case of a design process with many specifications, i.e., a highly constrained problem, the search for the feasible region can become even harder. When there are many constraints, the feasible region can be very small and even scattered across the design space. This sometimes makes it impossible to find the feasible region(s), even with an active learning strategy. Therefore, the aim of this work is to improve current state-of-the-art methods so that they can also be used successfully for feasible region identification in highly constrained design spaces.".
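Editorial note on the record above: a minimal sketch of an active-learning loop for feasible-region identification, under assumptions of my own (a scikit-learn Gaussian-process surrogate, a toy simulate() constraint and a simple boundary-focused acquisition rule); none of this is taken from the cited work.

```python
# Illustrative active-learning loop for feasible-region identification.
# Assumptions: scikit-learn GP surrogate, one hypothetical expensive
# constraint function, feasibility defined as simulate(x) <= 0.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(x):
    # Hypothetical expensive simulation returning one constraint value;
    # designs inside the unit circle are feasible.
    return np.sum(x**2) - 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 2))          # small initial design of experiments
y = np.array([simulate(x) for x in X])

for _ in range(20):                          # active-learning iterations
    gp = GaussianProcessRegressor().fit(X, y)
    cand = rng.uniform(-2, 2, size=(200, 2)) # cheap candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    # Pick the candidate predicted closest to the feasibility boundary,
    # weighted by the surrogate's uncertainty.
    x_next = cand[np.argmax(sigma / (1e-6 + np.abs(mu)))]
    X = np.vstack([X, x_next])
    y = np.append(y, simulate(x_next))

feasible = X[y <= 0]
print(f"{len(feasible)} of {len(X)} simulated designs are feasible")
```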
- 01GMVB1FNEJHPBGN9WW7MSFA4Y abstract "Building envelopes are sensitive to climate conditions. Changes to the climate conditions may alter the hygrothermal behaviour of building envelopes, and thus their performance in terms of deterioration. Therefore, it is important to assess the impact of climate change on damages in building envelopes. We performed 35,000 hygrothermal simulations of solid masonry walls over 10 locations across Europe. We included a range of parameter variations (e.g. masonry thickness, walls with/without interior insulation, brick type, etc.). The climate change signal per location and damage mechanism results in a distribution, meaning that not all parameter combinations result in the same climate change signal. For example, there may be an increase in freeze-thaw damage for some parameter combinations, and a decrease for other combinations at the same location. Here, we present the 90th percentile of the climate change signal. In this way, 90% of the cases have a smaller or negative climate change signal. Based on the results, some damage risks are projected to increase, whereas others remain constant. Furthermore, the impact of climate change is not the same for different damage mechanisms and/or locations across Europe. Especially in the north of Europe, the risk of moisture-related damage is projected to increase. The results in this study are based on one model (i.e. the REMO regional climate model) and one scenario of projected greenhouse gases (i.e. RCP 4.5). Future studies should focus on more models and scenarios.".
- 01GMVB1FNJ47TR67PX9Y3576YF abstract "Tissue engineering (TE) is a rapidly emerging field aiming at overcoming the limitations of conventional transplantation methods through the development of substitute structures, so-called scaffolds, for the restoration of damaged tissues. One of the promising approaches towards successful tissue repair is the fabrication of a scaffold mimicking the fibrous structure of the extracellular matrix (ECM), which governs a wide range of crucial cellular functions such as adhesion and proliferation. Electrospinning can be used as a simple, versatile and cost-effective biofabrication technique able to produce polymeric fibrous meshes simulating the ECM morphology. Several biodegradable polymers have been employed in the generation of scaffolds, with polycaprolactone (PCL) being by far the most considered material due to its non-toxicity and history of safe clinical use. Despite the afore-mentioned advantages, PCL is a hydrophobic material that does not exhibit the desirable bio-chemical properties promoting cell-surface interactions. Therefore, a surface modification introducing appropriate biochemical properties can solve this issue. The generation of thiolated surfaces is gaining huge popularity in TE applications since thiol groups (-SH) are known to serve as highly selective anchoring sites for the subsequent covalent immobilization of biomolecules. This was previously done via self-assembly approaches that have some drawbacks, such as the use of organic solvents, the long reaction times and the multi-step procedure. As an alternative, plasma-assisted polymerization of thiol-containing precursors can be employed to deposit thiol-rich coatings onto the scaffolds in a fast, solvent-free and eco-friendly way. In this research, PCL nanofibers are first produced using electrospinning. A plasma polymerization process was then performed to deposit a thiol-rich coating on the electrospun fibers, using a dielectric barrier discharge (DBD) with 1-propanethiol as precursor. An optimization of the process was carried out via an extensive parametric study involving the discharge power, gas flow rate, treatment time and pressure inside the DBD chamber.".
- 01GMVB1MJ1SGHMPT7S27TY23SE abstract "The bed shape of many tidal basins in coastal regions is characterized by complex patterns of channels and shoals, which exhibit cyclic behaviour, resulting from the interplay among water motion, sediment transport and bottom changes. A depth-averaged (2DH) idealized model is developed to directly identify morphodynamic equilibria in short semi-enclosed meso-tidal inlet systems with arbitrary planform geometries using a continuation approach. By means of this model, the morphodynamic equilibria in short tidal basins are systematically studied, with a specific focus on their sensitivity to the Coriolis effect and variations in planform geometry. The model results show that in the case of a sufficient degree of widening in the landward direction, the equilibrium bed profile exhibits significant lateral structures, characterized by a central shallow ridge flanked by two deeper channels. The first channel bifurcation, as observed in many short tidal inlet systems, is forced by the planform geometry of the basin and is sensitive to the exact planform geometry considered. The associated physical mechanisms are explained. Furthermore, two mechanisms inducing asymmetric morphodynamic equilibria are investigated, of which the effect of an asymmetric basin planform seems to be dominant over that of the Coriolis force.".
- 01GMVB1MJ6K19AQ30BPYX8JVAE abstract "The global data traffic, whether wired or wireless, will not stop growing. The worldwide fixed internet traffic is increasing 2-fold every 3-4 years. Therefore, faster optical communication systems are demanded for the transmission of larger volumes of data. A typical digital coherent optical communication system contains a digital-to-analog converter (DAC) to convert digital signals from digital signal processors (DSPs) to analog signals. These DACs are generally fabricated in the complementary metal oxide semiconductor (CMOS) process for integration with DSPs fabricated in the same process. Nowadays, CMOS DACs are facing speed bottlenecks due to their limited analog bandwidth of around 56 GHz when used for high-speed transmitter architectures targeting beyond 100 Gbps. This PhD research explores solutions to overcome such bandwidth limitations to drive off-chip optical modulators at higher speeds. One of the solutions this PhD research is investigating is to combine CMOS technology with a faster III-V technology, namely Indium Phosphide (InP). That is, all complex architectures like PRBS15, delay blocks, duty-cycle correction (DCC) circuits and predistortion techniques like feed-forward equalization (FFE) and PAM4-level adjustments were implemented in 28nm planar CMOS to leverage its high yields. Then, to extend the bandwidth and to target higher speeds, a 4:1 Analog Multiplexer (AMUX) was implemented in 250nm InP-DHBT technology. Next, it is planned to combine this 4:1 AMUX with the aforementioned CMOS DACs such that symbol rates beyond 224 Gbaud are achieved with a 400mVpp output voltage swing. A further feature this 224/448 Gbps NRZ/PAM4 wireline CMOS-InP transmitter module offers is the capability to equalize not only the channel at the CMOS DACs' output, but also the channel at the output of the first InP 2:1 AMUX to maintain signal integrity.".
- 01GMVB1MJAVES0J3WHDJ2GNQ8E abstract "From both the research community and industry there is an ever-growing interest in the application of UAVs (more commonly known as drones) for use in e.g. agriculture or infrastructure inspection. If a commercial UAV solution comes equipped with a sensor for mapping, it is typically a single RGB camera or a LiDAR system. However, different types of sensors provide different, often complementary, kinds of information. For example, LiDAR sensors yield geometric information about a scene, whereas an RGB or hyperspectral camera provides colour information. Although much research has been conducted to extract geometric information (3D models) from (2D) images via photogrammetry, this process is extremely complicated and slow and could be sped up immensely by incorporating LiDAR information directly. Therefore, ideally we should equip a UAV with all kinds of different sensors. For this reason, the UAV Research Centre (URC) has constructed a sensor rig and is developing a pipeline for the joint processing of the different data modalities, in particular LiDAR and spectral information. In this poster we provide a high-level overview of some design aspects, such as the optimal orientation to mount a LiDAR and the extrinsic calibration.".
- 01GMVB1SESZ15JCFCYP5KYBP1N abstract "The demand for data and accompanying services is ever-increasing, which poses several challenges in managing data-driven systems. This led to the creation of data ecosystems, which facilitate creating, consuming, managing and exchanging data for varying purposes and actors. However, when setting up such an ecosystem, knowledge on what the requirements of the ecosystem should be in order to get (the most) economic value out of its data is still limited. There is a need for support in the decision processes that come with designing a data-driven ecosystem. In order to fill this gap, a taxonomy is being made of the different types of ecosystems that (can) exist, in a parameterized manner. These (discrete) parameters should provide a clear overview of the defining characteristics of data-driven ecosystems. Once these parameters are established, a crucial next step in this taxonomy analysis is to identify the most feasible ways value can be created by mapping value creation methods to parameter values. This should allow data ecosystems to be designed according to value creation needs. Additionally, existing ecosystems could be analysed in order to discover missed potential value, or even reveal the need for a strategy change.".
- 01GMVB1SEY156GBMCY13EQW91H abstract "After hundreds or thousands of years of weathering, these distinctive building façades are very vulnerable to the changing environment and require regular maintenance and preservation to keep them lasting. In recent years, investment in heritage conservation and renovation has shown an increasing tendency, both financially and materially, due to climate change and the large number of historic buildings. To better preserve these buildings, we have to understand how they deteriorate. This means understanding the characteristics of building materials and how they respond to the environment: how do heat, air and moisture migrate in the building envelope and cause deterioration of the façade? How will different climate scenarios impact the deterioration of the façade, and how may climate change and future climate projections influence that deterioration? To make this projection convincing, we have to perform a large number of simulations of various conditions, until finally a large database is built and a decision tree is constructed in a scientific and statistical way.".
- 01GMVB1SF2ET512VZJNZY920YF abstract "Continuously increasing amounts of solid plastic waste (SPW) are being generated worldwide. Chemical recycling techniques, e.g. pyrolysis, show a great potential to reduce the environmental harm induced by SPW pollution. These techniques can be combined with innovative chemical processes driven by renewable energy, resulting in carbon circularity. Detailed kinetic modeling is required to understand and optimize these processes. The main aim of this work is to develop a detailed kinetic model for the thermal pyrolysis of vinyl polymers. The model combines a detailed explicit simulation of the chemistry of the light species with the Method of Moments (MoM) to simulate the heavy species. The model contains 9 reaction families, 45 species, and 66 moments. The explicit simulation and MoM are linked via a dedicated routine, based on the gamma distribution. The model simulations are performed with a new computational framework that implements an evaporation model, the exact simulation of the light species, the MoM-based simulation of the heavy species, and the links between these modeling regimes. The constructed preliminary model is validated against detailed pyrolysis product distributions. The preliminary model results yield a proof of concept for this new modeling approach.".
- 01GMVB1YBKWXVSD6MNQPCE44BF abstract "Over the last years, manufacturing companies have experienced an increasing demand for more complex products with an increasing number of product variants. The high complexity and number of variants in the manufacturing line require a higher level of expertise from operators. To make sure that operators can adapt to this need, adequate and timely training, guidelines and real-time support of workers in the manufacturing area are required. For a better manufacturing process, operators should receive appropriate and up-to-date process and safety training and guidelines to decrease risk. One of the main challenges in smart industry is therefore providing smart guidance and support for operators based on their needs. Smart assistance systems in the manufacturing environment can lead to higher quality and more efficiency. One way to provide this support is a question answering (QA) system. A QA system gives appropriate answers to questions expressed in natural language. The advantage of QA systems on the assembly line is that operators can ask queries in natural language and get a precise answer, instead of being shown a list of links to documents that may or may not be relevant, or waiting a long time for the supervisors. Most studies on QA systems concern the open domain. The restricted domains already covered in the research are mainly related to medical care. To the best of our knowledge, there are still gaps for the assembly domain related to having a QA system for supporting the operators on the assembly line. Text-based operator support in the assembly domain is not widely explored in the state of the art.".
- 01GMVB1YBQR1WEH9WHEXKSJDZB abstract "α-1 Acid glycoprotein (AGP) is one of the most abundant plasma proteins. It fulfills two essential functions: immunomodulation and binding to various drugs. These different functions are closely associated with each other via changes in the glycosylation of AGP in response to inflammation. From a structural point of view, glycans alter the local biophysical properties of AGP, leading to a diverse ligand-binding spectrum. There is evidence that AGP's glycosylation can be utilized as a potential biomarker for the prognosis of cancer. However, the resolved X-ray crystallography structure of AGP does not contain glycans, which are not observed due to their high flexibility and microheterogeneity. This limits our understanding of AGP's conformational dynamics 70 years after its discovery. Also, no studies have linked the effect of AGP glycosylation to cancer missense mutations. Here, we investigate the conformational dynamics of AGP with respect to missense mutations and their effect on glycosylated and un-glycosylated forms of AGP by using Molecular Dynamics (MD) simulations and sequence-based dynamics predictions. The MD trajectories show that glycosylation decreases the local backbone flexibility of AGP and increases the flexibility of distant regions due to allosteric effects. Moreover, glycosylation compacts the backbone slightly and shields the protein surface, reducing its accessibility. We observe that when a residue is mutated close to the glycosylation site, the glycan's conformational preferences are affected. Thus, we conclude that mutations control glycan dynamics, which modulates the protein's backbone flexibility, directly affecting its accessibility.".
- 01GMVB1YBW3MT3D9FQSV1F4XDH abstract "Electric motors with random wound stator windings and identical nameplate values do not have identical high-frequency values, even when consecutively produced in one single batch. To investigate this, eight motors with identical nameplates (seven from one single batch and with consecutive serial numbers) were measured using a Hioki RLC measuring device and fitted to a model. The variation in behavior was observed in the raw measurements as well as in the fitted parameters. It was found that the parameters are normally distributed and show similar variation to the raw data. The variation observed was not only between motors but also within one motor, causing asymmetry between the three phases of a single motor. In order to make high-frequency models more widely applicable for mass-produced electric motors, a large measurement campaign is required to accurately map the parameters and their variation. This will allow future simulations using statistical values, e.g. mean and standard deviation, allowing conclusions to be drawn for a motor type rather than one particular case.".
- 01GMVB238ATJ3C19ZQC21ZXCVP abstract "Since the external facades of our architectural heritage are protected and not allowed to be modified, our patrimony will have to be insulated from the inside during the renovation wave. However, interior retrofitting drastically changes the hygrothermal behaviour of a wall. These new conditions can induce different moisture-related problems such as freeze-thaw damage, mould growth, wood decay, condensation and corrosion. Whether or not damage will occur depends on many different parameters. Heat, Air and Moisture (HAM) simulations can provide relevant insights for interior retrofitting practice but are rarely applied. By creating a methodology to convey the results and insights from hygrothermal simulations to the building industry, decision-making for building practitioners will improve regarding risk management when retrofitting from the inside.".
- 01GMVB238F7FACCMW0Z9QEEPVV abstract "An LVDC backbone with aggregated PV and BESS is a promising grid architecture leading to a considerable reduction of the energy losses. Additionally, previous analysis showed that the replacement of the multiple small DC/AC inverters in a traditional LVAC grid by one central DC/AC inverter creates many other scale-related advantages. The operating voltage level is an important parameter therein, as it has a direct impact on the cable and conversion losses. As the PV and BESS voltages and powers exhibit a high variability due to the natural intermittency of solar irradiance, potential benefits emerge when a dynamic backbone voltage optimisation is applied instead of a static voltage. Moreover, by extending the objective of the optimisation with the maximisation of the produced PV power, the DC/DC converter, which guarantees the maximum power point (MPP) operation of the PV system, could be eliminated. The dynamic voltage strategy will hence be driven by a multi-objective optimisation algorithm in order to define the Pareto front between minimising the cable and conversion losses and maximising the PV yield. The elimination of the DC/DC converters would lead to a reduction of the cost, the space and the complexity of the installation. The obtained results exhibit a clear decrease of 53% in energy losses when applying a dynamic voltage strategy with MPP tracker (MPPT). This is mainly caused by the decrease in cable losses and conversion losses in the second-stage DC/DC converter of the central grid inverter. Eliminating the MPPT leads to a slight increase of the losses, which is a consequence of the compromise between the predefined objectives. However, this slight loss increase combined with the 1% curtailment loss opens perspectives to simplify the PV installation and to reduce the balance-of-system cost.".
- 01GMVB238JP8SRZJ49G5B0SA8F abstract "By means of vibrotactile and force feedback, i.e., haptics, users are given the sensation of touching and manipulating virtual objects in interactive Extended Reality (XR) environments. However, research on the influence of this feedback on the users' perception and performance in interactive XR is currently still scarce. In this work, we present an experimental evaluation of the effects of haptic feedback in interactive immersive applications. By means of a Projected Augmented Reality (PAR) setup, users were asked to interact with a projected environment by completing three different tasks based on finger-tracking and in the presence of visual latency. Evaluations were performed both subjectively (questionnaire) and objectively (i.e. duration and accuracy). We found that while haptic feedback does not enhance the performance for simple tasks, it substantially improves it for more complex ones. This effect is more evident in the presence of network degradation, such as latency. However, the subjective questionnaires showed a general skepticism about the potential of incorporating haptic information into immersive applications. As such, we believe that this paper provides an important contribution toward the understanding and assessment of the influence of haptic technology in interactive immersive systems.".
- 01GMVB2854B8827YKTB6VNSYM7 abstract "Silica nanofibrous membranes are flexible, light-weight and porous materials that showcase excellent thermal and chemical resistance. However, the production via a combination of sol-gel synthesis and electrospinning depends on many factors of which the influence is not completely understood. In this work, we investigate the link between the viscosity and the electrospinnability of the tetraethylorthosilicate (TEOS) based spinning solution as a first step in linking the different scales in the process. The viscosity of the silica sol has a clear effect on the behavior of the Taylor cone and hence on the stability of the electrospinning process. Exploiting these insights is of great importance for future chemically driven tuning of silica nanofibrous membranes.".
- 01GMVB2859MVSAJ905M6PGNR9Y abstract "To provide improved recommendations for selecting an adequate lubrication method for tapered roller bearings (TRBs), detailed measurements of global friction torque and temperature distribution in TRBs are performed as a function of different speeds and loads. Moreover, different lubrication methods are investigated, considering different oil quantities in the bearing housing. The experimental data are compared with two existing frictional models, i.e., the Harris and SKF models, and their range of applicability is also assessed. In this work, an in-house bearing test setup is used to simulate different operating conditions (speed and load). The bearing temperature is measured using thermocouples mounted on the stationary outer raceway, whereas the friction force is measured by a load cell.".
- 01GMVB285DMHG5VTA5Y451Q33J abstract "Steel support structures of offshore wind turbines (jackets and monopiles) undergo both fatigue and corrosion damage, impacting their lifetime. Due to the time-variant uncertainties associated with environmental and mechanical loads, reliable models that allow prediction of the degradation due to corrosion and fatigue are necessary to accurately assess the structural integrity and to support decision-making. This work investigates how pitting corrosion, caused by exposure to the marine environment, affects the fatigue strength of structural steel. A short crack model is used to estimate the minimum applied load amplitude required to cause a growing crack emanating from the bottom of a semi-elliptical pit. The modeling results show the fatigue strength degradation as a function of the exposure time to the corrosive environment. As exposure time increases, it is observed that degradation happens more quickly in the early years, followed by a convergence of the fatigue strength to a minimum value. Also, a parametric study is done to assess the effect of the pit size and sharpness on the degradation of the fatigue strength. It illustrates that, for a specific pit sharpness, increasing the pit size causes the fatigue strength to decrease sharply at first and then to converge to a specific value which depends on the sharpness of the pit.".
- 01GMVB2D1Y5PPZ7FR05FY6NE63 abstract "A possible scenario of fire inception in microgravity is ignition and burning of solid or liquid materials followed by flame growth and spread. In this scenario, as well as in normal gravity, the flammability of the material is directly coupled with the stability of the gaseous flame. Until recently, the transient dynamics and stability of low-momentum diffusion flames emanating at a flat surface into a stagnant atmosphere in microgravity had not been investigated. To date, we could not even predict whether this kind of flame can attain a steady state. This poster presents an overview of the orbital experiment BRE-Flamenco performed onboard the International Space Station in 2019-2021, together with the theoretical and numerical analysis of the experimental results. The key findings of this work disclose the major effects of reduced gravity on laminar diffusion flames, including a dramatic increase of the radiative fraction in the energy balance causing flame extinction. This flame extinction is identified as radiative extinction, as opposed to the strain-induced blow-off occurring in normal gravity. The flame is shown to be inherently unstable: the initial phase of steady flame growth is interrupted by local flame extinction, followed by flame oscillations (of toroidal, spiral and capuchin type) and complete extinguishment. It has been established that a steady flame can only exist at high oxygen concentration and low fuel supply rate (characteristic of solid or liquid combustibles), and the critical condition for the steady flame to exist has been proposed. The approximate theory developed in this work is shown to predict the critical pre-extinction flame size, while the transient flame dynamics, including the formation of triple flames, is favorably replicated in detailed CFD simulations.".
- 01GMVB2D23XCF4GFR3XMGS9GGJ abstract "An intrusion detection system (IDS) is a security monitoring system capable of detecting potential attacks that get past the firewall. Traditionally, these systems rely on pattern matching and signatures of known attacks, but in the last decades, security researchers have shifted their interest to machine learning (ML) techniques with varying degrees of success. While the published literature reports outstanding results, we are still waiting for the breakthrough of operational ML-IDS in the real world. To address this issue, we investigate the source of the problem, the datasets themselves, and propose guidelines to work with existing benchmark datasets to improve the generalization power of the ML models. This study evaluates the generalization strength of an ML model using the novel inter-dataset evaluation strategy compared to the traditional intra-dataset strategy. The results show high classification performance on the training dataset but fail to transfer this to a second, related but unseen dataset, with an average drop in accuracy of 25%. The proposed inter-dataset evaluation strategy empowers future research to estimate the generalization strength of ML solutions closer to a real-world setting.".
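Editorial note on the record above: a minimal sketch of the intra- versus inter-dataset evaluation contrast it describes, under assumptions of my own (scikit-learn, two hypothetical CSV benchmark files with a shared feature schema and a binary label column); this is not the cited study's code or data.

```python
# Illustrative contrast between intra- and inter-dataset evaluation of an IDS
# classifier. Assumptions: two benchmark CSVs with identical feature columns
# and a binary 'label' column; file names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

train_df = pd.read_csv("ids_dataset_A.csv")   # hypothetical benchmark A
eval_df = pd.read_csv("ids_dataset_B.csv")    # hypothetical related benchmark B

X_a, y_a = train_df.drop(columns="label"), train_df["label"]
X_b, y_b = eval_df.drop(columns="label"), eval_df["label"]

# Intra-dataset: train and test on splits of the same dataset.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("intra-dataset accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Inter-dataset: evaluate the same trained model on a second, unseen dataset.
# The gap between the two numbers estimates generalization strength.
print("inter-dataset accuracy:", accuracy_score(y_b, clf.predict(X_b)))
```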
- 01GMVB2D278ZX1JZ1D5XCSBY4E abstract "This study has developed a numerical tool capable of performing mooring design optimization analysis for floating wind turbine (FWT) platforms. The numerical model is capable of i) correctly modeling the physical behavior of a moored FWT platform and ii) running at a low computational cost (i.e., faster than real time), allowing for the evaluation of many different mooring configurations. The numerical model is based on the Cummins-Ogilvie equation of motion. This hybrid frequency-time-domain approach allows fast time-domain simulation of floating bodies, including external forces such as mooring lines or viscous effects. Firstly, the platform's excitation force and hydrodynamic coefficients are obtained in the frequency domain using a Boundary Element Method (BEM) solver. Secondly, the results from the BEM solver are converted to the time domain via a convolution integral of the impulse response functions to calculate the radiation force and an inverse discrete Fourier transform to compute the excitation force. Thirdly, viscous effects are considered via the Morison equation using empirical coefficients. Finally, the mooring lines are modelled according to the lumped-mass approach, omitting the coupling between internal nodes, which allows the system to be transformed into numerous ordinary differential equations that are solved individually. The established numerical tool can be further coupled with single- and/or multi-objective optimization methods (e.g., genetic algorithms, particle swarm optimization, etc.) to perform a mooring design loop for an FWT platform, achieving the optimal configuration in terms of platform motions, mooring line tensions and cost.".
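Editorial note on the record above: for reference, the Cummins time-domain equation of motion that this hybrid frequency-time-domain approach is built on is commonly written in the standard form below (textbook form, not reproduced from the cited work).

```latex
% Cummins time-domain equation of motion for a floating body (standard form):
%   M      - rigid-body mass matrix
%   A_inf  - added mass at infinite frequency
%   K(t)   - radiation impulse-response (retardation) functions
%   C      - hydrostatic restoring matrix
%   F_exc  - wave excitation force, F_ext - external forces (mooring, viscous drag)
\[
  (M + A_\infty)\,\ddot{x}(t)
  + \int_{0}^{t} K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau
  + C\,x(t)
  = F_{\mathrm{exc}}(t) + F_{\mathrm{ext}}(t)
\]
```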
- 01GMVB2HYPMNT4M94DZ063XTPG abstract "Short fibre-reinforced thermoplastic composites are increasingly popular in the automotive industry thanks to low cost, short cycle times and recyclability. They can be easily injection-moulded into complex shapes. However, compared with thermoset composites or metals, they are more sensitive to environmental and loading conditions. The aim of this study is to characterise the tensile behaviour of injection-moulded short glass fibre-reinforced polyamide 6 (PA6) under conditions that are representative of in-service applications. This is done by varying fibre orientation, temperature, humidity, and strain rate. The material is heterogeneous and has a complex microstructure. This calls for a dedicated measurement setup during tensile testing. Strains are mapped full-field on the specimen surface by means of stereo digital image correlation (DIC). An infrared (IR) camera is synchronised and calibrated together with the DIC cameras, so that the temperature field can also be tracked on the deformed surface. This gives insights into coupled thermo-mechanical phenomena like self-heating due to plastic dissipation, enabling correct interpretation of the material response. This technique could prove equally suitable for cyclic loading and other classes of materials, when inhomogeneous strain fields and thermal effects are likely or expected to occur.".
- 01GMVB2HYTS1KR30K5BACR05Q7 abstract "In recent years, the continuous growth of Internet traffic has been boosting the demand for short-reach optical links in data centers, vastly driven by new applications such as Cloud Computing, Virtual Reality/Augmented Reality, and Internet of Things. According to a recent report from Cisco, most of the datacenter traffic stays within the mega datacenters, further increasing the demand for faster optical interconnections. To date, intensity-modulation/direct-detection (IMDD) solutions still dominate the market of short-reach optical interconnects, but they face steep challenges in scaling up beyond 100Gbps per wavelength (i.e. 50GBaud PAM-4 modulation). Coherent detection offers high spectral efficiency and receiver sensitivity, but conventional DSP-based coherent receivers have prohibitively high cost and power consumption for intra-data-center links. This PhD research focuses on increasing the speed of these links. We have designed a transimpedance amplifier that aims at a symbol rate of 100GBaud. In the first instance, I will use this chip in an IMDD receiver configuration. After that, balanced photodiodes will be used to extend the receiver to coherent formats. In addition, 'DSP-lite coherent detection' techniques using analog/mixed-signal processing with an optoelectronic phase-locked loop promise to drastically lower power consumption and complexity compared to DSP-based solutions.".
- 01GMVB2HYYA4TA22Y5F67V76R1 abstract "Today's data centres, which underpin software such as cloud computing, artificial intelligence and machine learning, rely on optical transceivers to move information between servers. Hundreds of thousands of such transceivers are installed yearly. Despite this being a mass market, the assembly of electronics and optics into a transceiver is still a step-by-step process, which makes it difficult to increase throughput and drive down cost. The EU-funded Caladan project is harnessing micro-transfer printing technology to move from sequential assembly to a manufacturing process in which thousands of transceivers are assembled in one parallel step. At the transmit side, a Mach-Zehnder Modulator (MZM) driver IC is designed to be compatible with the micro-transfer printing process by limiting its footprint to a mere 200µm by 300µm. Such a small footprint allows for dense integration with the long (typically a couple of mm) MZMs implemented in imec's silicon photonics platform (iSiPP). However, innovation at the device level only is not sufficient to support the increasing demands for more bandwidth. System-level developments towards coherent communication links are hampered by the need for power-hungry DSP to correct the phase error between the received signal and the local oscillator (LO) at the receiver. One way to completely mitigate this problem is by detecting the phase difference between subsequent symbols (by way of a delay line interferometer), instead of detecting the phase with respect to a fixed reference laser (LO). Analysis shows that optical systems using Differential Phase Shift Keying (DPSK) offer a viable alternative to LO-based coherent systems.".
- 01GMVB2PVWBKCBWVD8NQFX4C2V abstract "A surgeon's most crucial task when repairing a broken scaphoid bone is to bring it back together and screw it into place accurately. Wrist anatomy is complex, so the surgeon needs a comprehensive view of wrist bones. X-ray imaging is one of the most convenient ways to do this, but it can't be used repeatedly because it is harmful to humans. In addition, the surgeon receives 2D information from the x-ray imaging, which makes it difficult for him or her to interpret the information in 3D for the patient's wrist. The purpose of the project is to accurately display a hologram of a patient's wrist bones on the patient themselves using augmented reality technology.".
- 01GMVB2PW1YCN0222JPG4FA98Q abstract "Water distribution companies around the world struggle to meet the demands of an ever-increasing population. In Flanders, approximately 60 million m3 of drinking water is lost every year due to leaks in the drinking water grid. In times of water scarcity, losing so much treated water is unacceptable. A key approach to reducing leak losses is combining hydraulic modeling and machine learning to localize leaks in a hybrid model-based and data-driven manner. We present our hybrid leak localization methodology, which also integrates human feedback from drinking water consumers into its predictions. Our methodology is evaluated on 10 leak experiments in real time. We show that leaks can be localized down to the street level, with each leak location prediction needing less than 2 hours of computing time. As a result, our solution enables Flanders to save approximately 24 million m3 of water per year.".
- 01GMVB2PW5GS7RS901SCN9V0BB abstract "In the energy transition towards a low-carbon economy, the EU strives to provide a hydrogen grid of almost 23,000 km by 2040. 75% of this is expected to be obtained by retrofitting existing pipelines for the transport and storage of hydrogen gas. However, steels are prone to hydrogen embrittlement, i.e., hydrogen reduces their ductility and fracture toughness. The engineering assessment criteria to accept or reject weld imperfections are to a large extent based on those mechanical properties. Therefore, the effect of hydrogen on the mechanical properties of welds, and particularly their heat affected zones, needs specific attention in the research on the structural integrity of welded joints. Dedicated tensile and fracture toughness test methods are developed and tests are performed on (welded) samples with and without hydrogen charging. Laboratory-scale tests are used to calibrate an in-house developed numerical diffusion-degradation-damage model of welded parts. Scale effects in hydrogen embrittlement sensitivity will be investigated by up-scaling the experiments. The numerical model will be validated by component-scale, so-called curved wide plate, mechanical experiments. Using the validated model, existing acceptability criteria for (girth weld) flaws will be evaluated and updated for hydrogen transport.".
- 01GMVB2VRMS8P549HZ2QZ8PHA0 abstract "Vibrations in mechanical systems are often undesirable. They can lead to failure and/or disruption of the proper operation of the system. Hence, mitigation of vibrations is indispensable. Two major classes can be observed: passive and active vibration mitigation. The former does not rely on sensors and actuators and is for that reason assumed to be less complex, more reliable with regard to failure, and more intuitive to tune. The latter allows more design flexibility, can be adaptive to changes over time, and is more compact. In the class of active vibration absorption, a feedback loop containing a control strategy is used to connect the sensors with the actuators. Many complex mathematical controllers already exist that have proven to be very effective at decreasing vibrations in a system. However, the interpretability of the controller parameters can be lacking and thus impede an intuitive tuning strategy, because of the abstract nature and the rapidly increasing number of controller parameters. The flexibility of active control makes it possible to imitate a passive tuned vibration absorber with extra design freedom: all types of nonlinearities and interconnections can be created. This could lead to a vibration control strategy that combines the advantages of passive and active vibration mitigation. The first step in designing an active nonlinear controller is to guarantee stability. Therefore, it is necessary to prove the ranges of the controller parameters that achieve an asymptotically stable system. Different methods to prove stability can be used and will lead to different limitations for the controller. In this work, the direct method of Lyapunov is used to prove stability based on an extended Lure-type Lyapunov function and a straightforward energy-based Lyapunov function. Both stability proofs will be compared with respect to the interconnection system/EL-controller they yield and the corresponding controller design freedom they offer.".
- 01GMVB2VRSG40997DK22NHW2WK abstract "Over two-thirds of all newly developed drugs are rendered useless because they don't dissolve in our stomach. Their high crystallinity prevents any active ingredients from entering our system. For this reason, increasing the aqueous solubility of poorly soluble drugs is highly researched. Solvent electrospinning of a solution of flubendazole and poly(2-ethyl-2-oxazoline) is demonstrated to be a viable strategy to produce stable nanofibrous amorphous solid dispersions with ultrahigh drug loadings (up to 55 wt% flubendazole). Importantly, at such high drug loadings, the concentration of the polymer in the electrospinning solution has to be lowered below the concentration at which it can be spun in the absence of the drug, as the interactions between the polymer and the drug result in increased solution viscosity. X-ray diffraction results and in vitro drug release studies confirm the remarkable amorphous stability and potential of this electrospinning formulation strategy through significantly increased drug solubility values and dissolution rates, even after storing the formulation for one year.".
- 01GMVB2VRZQS773YDHH8W1EPV6 abstract "High confinement tokamak fusion devices are characterized by magnetohydrodynamic instabilities that occur near the edge of the plasma, called edge-localized modes (ELMs). While not necessarily catastrophic, the consequent losses in temperature and energy, as well as the wear and tear to the wall and plasma-facing components, pose a significant risk. Previously, a robust peak detection algorithm was developed to find ELMs, extract their properties (timing, losses, etc.), and predict their behavior on a range of tokamaks, such as JET, ASDEX Upgrade and DIII-D, with minimal manual tuning. Recently, modes of operation with smaller but more irregular ELMs have been gathering interest. However, detecting these erratic ELMs consistently with existing tools has been challenging. More recently, machine learning methods such as support vector machines and 1D convolutional neural network classifiers have been used to recognize anomalous events. Inspired by these tools, new ELM detection methods are proposed and their performance is compared with the existing algorithm.".
- 01GMVB30NDKR39W0DFEW6MN84G abstract "Surface interaction in the abrasion of multiphase materials such as cermets by hard abrasive particles is a complex phenomenon that requires in-depth understanding at the micro scale. The different characteristics of abrasive particles, such as size, hardness and shape, create asymmetric surface topographies on the worn surface. The resultant wear mechanism from abrasive particles is a combination of brittle and ductile failure modes, which is associated with the different phases of the material. It is essential to understand the fundamental insight and the damage process that causes inconsistency in the scratch patterns in multi-asperity contact. Such interaction is easily mapped using scaled-down laboratory studies to understand the dominating variables (operational and material) and their associated damage mechanisms. Against this background, the current investigation evaluates the response of two similarly hard WC-15.6Co (WC-Co) and NbC-12Ni-10Mo2C (NbC-Ni) multiphase cermets during three-body abrasive wear, which was experimentally simulated by a dry sand/rubber wheel test. The study has been performed according to the ASTM G65 standard with different characteristics of abrasive particles, such as size (67-245 µm), shape (round and angular) and hardness (silica, alumina and SiC). The rate of change in wear is explained based on the observed wear micro-mechanisms of the worn surface. The wear micro-mechanisms and associated surface damage were characterized by SEM-EDS and 3D white-light interferometry. Special focus was placed on characterizing the circularity and roundness factor to correlate the engagement and wear pattern made by the abrasive particles. The present study has developed new insight into the wear resistance of cermets against different abrasive characteristics and provides a better understanding of the transition of wear micro-mechanisms during three-body abrasive wear.".
- 01GMVB30NKC4ZDMTTGCW01PDKY abstract "During processing, polymers are often subjected to high temperatures and high shear rates during the (long) residence times inside the machine. This can cause the material to degrade, leading to structural changes in the polymer chain and thus an overall change of material properties. The main types of degradation that occur for (co)polyesters include thermal, oxidative, and hydrolytic degradation, which can lead to discoloration due to functional group formation, crosslinking, or chain scission. Chain scission, being the main degradation mechanism for PET, refers to the rupture of polymer chains and can be induced by the presence of oxygen, water, or high temperatures. It results in smaller polymer chains, leading to a decrease of the molecular weight of the polymer and consequently a deterioration of material properties. This contribution examines the influence of three different viscosity grades on the degradation during processing of PETG. By analyzing the inherent viscosity and melt flow index of the material before and after processing, changes in the flow behaviour can be determined.".
- 01GMVB30NQ8WSC1CQYA0881359 abstract "An important challenge in the valorization of CO2 and H2 into fuels is the development of a stable, reusable and easy-to-handle heterogeneous catalyst. Here, a silica nanofibrous membrane is investigated as a carrier for Ru nanoparticles, themselves encapsulated inside the metal organic framework Cr-MIL-101. The catalytic membrane is investigated for the Sabatier methanation reaction. The direct electrospinning of a tetraorthosilicate sol-gel system results in a highly thermally resistant silica nanofibrous structure (up to 1100°C) with a large amount of pores between the fibers in the µm range, allowing a high gas throughput with low pressure requirements. A straightforward dip-coating procedure of the carrier was used to obtain a Ru@MIL-101 functionalized silica nanofibrous veil, avoiding Ru clustering. The obtained catalytic membrane exhibited an apparent turnover frequency of 3257 h-1 at 250°C. This system therefore paves the way towards structured reactors for efficient CO2 hydrogenation processes.".
- 01GMVB35JBTE11ARWQJZRWT9H4 abstract "The 'oxygen paradox' can be explained as two opposing biological processes with oxygen (O2) as a reactant. On the one hand, oxygen is essential to aerobic metabolism, being used by mitochondria to fuel oxidative phosphorylation. On the other hand, an excess supply of oxygen will generate reactive species which are harmful to the cell. To maintain oxygen homeostasis, the first process should be maximized compared to the second one. We have hypothesized that curved and cholesterol-enriched membrane invaginations called caveolae help maintain the proper oxygen level by taking up oxygen and attenuating its release to the mitochondria. The mechanism by which caveolae may help to buffer the oxygen level in cells is still unclear. Here, we aim to assess how structural aspects of caveolae, namely the curvature and the cholesterol content of the membrane, influence the local oxygen abundance and the membrane permeability. We have modelled liposomes with varying size (curvature) and with varying ratios of phosphatidylcholine (POPC) and cholesterol, using molecular dynamics simulations. The associated changes in the oxygen free energy profile and permeability will be presented.".
- 01GMVB35JFVQ0ANYQJVGBRXR5J abstract "Both thermodynamic selectivity and kinetic selectivity are important determinants of the in vivo efficacy of candidate drug molecules. Computational tools to predict thermodynamic and kinetic parameters are therefore necessary for the screening stage of the drug-design pipeline. As the timescale of most drug (un)binding processes is out of reach for conventional molecular dynamics simulations, specialized methodologies are needed to determine the (un)binding rates and residence times of protein-drug complexes. To assess the kinetics, most methods require the definition of a reaction coordinate that accurately describes the progression of the (un)binding process. As these (un)binding pathways are not known a priori, it is possible that free energy barriers are hidden along degrees of freedom (DoFs) that are orthogonal to the low-dimensional and user-defined reaction coordinate, resulting in inaccurate kinetics. In this study, Replica Exchange Transition Interface Sampling (RETIS) [1] is used to investigate the orthogonal DoFs in the unbinding pathways of imatinib from the kinase protein ABL (wild type and mutated variants). RETIS is an exact path sampling method, which generates trajectories using shooting moves, and which accepts or rejects trajectories using a Monte Carlo algorithm. RETIS is reaction-coordinate independent and produces reactive trajectories as if they originated from a long molecular dynamics simulation. As such, the hidden energy barriers should be captured by the paths generated by RETIS, which will be presented here. Reference: [1] R. Cabriolu, K. M. Skjelbred Refsnes, P. G. Bolhuis, and T. S. Van Erp, 'Foundations and latest advances in replica exchange transition interface sampling,' J. Chem. Phys., vol. 147, no. 15, 2017.".
- 01GMVB3AEXSKFSQX7AK4SGHAHR abstract "Challenges in clean water availability have risen over the years, and especially third-world countries are in great need of localised, low-cost water purification techniques that make use of separation membranes. Electrochemical treatments in (waste)water management show high potential in the global water resource crisis, but are often limited by the performance of the ion-exchange membrane (IEM). Low chemical resistance and fouling are major issues in the development of the next generation of IEMs. An interesting choice of material for IEMs is the use of nanofibers, due to their outstanding ionic properties as a result of their specific morphology. Nanofiber membranes are known to have a large specific surface area, flexibility, high porosity and interconnected pores. Different strategies are applied for the production and structural design of these ion-exchange nanofiber membranes. Nanofibers with an ion-exchange functionality can be produced by either pre- or post-functionalization methods, combined with electrospinning. Depending on the application, these nanofiber mats can be used as such, or further membrane processing is possible to improve the dimensional stability, typically by adding a pore-filling matrix in between the nanofibers. By producing IEMs from hybrid nanofibrous membranes containing both organic and inorganic parts, a wide range of different membrane properties can be obtained by altering the molecular structure. This results in IEMs with high thermal and chemical resistance as well as tunability towards, among others, mechanical properties and hydrophobicity for use in harsh environmental conditions. For example, sulfonated silica-based nanofiber cation-exchange membranes (CEMs) offer a promising solution to the current issues of IEMs, due to their superior chemical resistance and self-cleaning properties.".
- 01GMVB3AF4JSYND33Q7W4CCM1T abstract "In this research we aim to investigate the job shop scheduling problem with uncertain processing times. First, we study the literature for the best solution approaches to the deterministic job shop problem. Among the algorithms developed in the literature, the well-known TSAB algorithm by Nowicki and Smutnicki (1996) has shown good performance in terms of solution quality and CPU time. Then we develop and solve a two-stage stochastic programming model, using TSAB and the Sample Average Approximation technique, assuming the uncertain data follow a known distribution. Furthermore, for more complex uncertainties we study robust and distributionally robust optimization techniques.".
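A minimal sketch of the Sample Average Approximation step described in the abstract above, using a hypothetical two-machine flow shop makespan and lognormally distributed processing times purely for illustration (the actual job shop model, the TSAB neighbourhood moves and the distributions used in the study are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def makespan(sequence, p):
    """Makespan of a two-machine permutation flow shop; a simple
    stand-in for the full job shop evaluation. p has shape (n_jobs, 2)."""
    c1 = c2 = 0.0
    for j in sequence:
        c1 += p[j, 0]               # completion time on machine 1
        c2 = max(c1, c2) + p[j, 1]  # completion time on machine 2
    return c2

def saa_objective(sequence, mean_times, cv=0.2, n_scenarios=2000):
    """Sample Average Approximation: approximate the expected makespan
    by averaging the objective over sampled scenarios of the uncertain times."""
    sigma = np.sqrt(np.log(1.0 + cv**2))
    mu = np.log(mean_times) - 0.5 * sigma**2
    values = [makespan(sequence, rng.lognormal(mu, sigma))
              for _ in range(n_scenarios)]
    return float(np.mean(values))

mean_times = np.array([[4.0, 3.0], [2.0, 5.0], [6.0, 1.0], [3.0, 4.0]])
print(saa_objective([1, 3, 0, 2], mean_times))
```

In the study itself, such a scenario-averaged objective would be evaluated inside the TSAB tabu search rather than for a single fixed sequence.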
- 01GMVB3AF7H3NNPRHQWNJBHEMC abstract "In recent years, industrialization and urbanization have increased the emissions of volatile organic compounds (VOCs). The term ``VOCs'' refers to carbon-based chemicals exhibiting a significant vapor pressure at ambient conditions. In addition to causing air pollution, VOCs can also threaten human health. Among the available technologies for mitigating VOC emissions, catalytic oxidation stands out as a promising solution. A deep understanding of the oxidation mechanism via fundamental kinetic modeling will allow exploiting the synergies between the catalytic material and operating conditions, aiming to develop optimal technologies for the abatement of VOCs. In general, the deep catalytic oxidation of VOCs is explained by three models, namely the Langmuir−Hinshelwood (L-H), Eley−Rideal (E-R), and Mars−van Krevelen (MVK) models. In addition, a power-law kinetic model can be employed as a first approach for further selection among the mentioned mechanistic models. In this work, methane was chosen as a VOC model compound to investigate its kinetic behavior over $\beta$cyclodextrin-Cu/hydroxyapatite ($\beta$Cd-Cu/HAP), applying a power-law approach. The temperature (350 -- 450 °C), inlet partial pressures (CH4: 2.9 -- 8 kPa, O2: 16 -- 48 kPa), and space-time (150 -- 550 kgcat s/molCH4,0) were varied to investigate the impact of operating conditions on reaction performance. The kinetic parameters were obtained using the power-law model as a first step to finding the best mechanism for the deep oxidation of methane over the tested conditions. The optimized reaction orders for methane (0.6) and oxygen (0.1) showed that methane oxidation depends less on gas-phase oxygen, meaning that oxygen contained inside the catalyst plays a larger role. The power-law model parameters also helped to eliminate 27 models (out of 69) and identify L-H and MVK as possible mechanisms.".
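A minimal sketch of the power-law rate expression referred to above, using the reported reaction orders (0.6 for CH4, 0.1 for O2) and a hypothetical pre-exponential factor and activation energy chosen purely for illustration, not the fitted values from the study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def power_law_rate(p_ch4, p_o2, T, k0=1.0e3, Ea=1.0e5, a=0.6, b=0.1):
    """Power-law rate r = k(T) * p_CH4^a * p_O2^b with an Arrhenius
    rate coefficient k(T) = k0 * exp(-Ea / (R T)).
    k0 and Ea are illustrative placeholders only."""
    k = k0 * np.exp(-Ea / (R * T))
    return k * p_ch4**a * p_o2**b

# Example: rate at 400 degC, 5 kPa CH4 and 30 kPa O2
print(power_law_rate(p_ch4=5.0, p_o2=30.0, T=400.0 + 273.15))
```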
- 01GMVB3FBNZJPX7T2WKTB5G5E9 abstract "Robotic cloth manipulation is challenging due to its deformability, which makes determining its full state infeasible. However, for cloth folding, it suffices to know the position of a few semantic keypoints. Convolutional neural networks (CNN) can be used to detect these keypoints, but require large amounts of annotated data, which is expensive to collect. To overcome this, we propose to learn these keypoint detectors purely from synthetic data, enabling low-cost data collection. In this paper, we procedurally generate images of towels and use them to train a CNN. We evaluate the performance of this detector for folding towels on a unimanual robot setup and find that the grasp and fold success rates are 77\% and 53\%, respectively. We conclude that learning keypoint detectors from synthetic data for cloth folding and related tasks is a promising research direction, discuss some failures and relate them to future work. A video of the system, as well as the codebase, more details on the CNN architecture and the training setup can be found at https://github.com/tlpss/workshop-icra-2022-cloth-keypoints".
- 01GMVB3FBTKV244610RYJ3P66Z abstract "Cooperative multi-agent manipulation systems make it possible to go beyond the manipulation limitations of individual agents, increasing the complexity of the manipulation tasks the ensemble can handle. Controlling such a system requires meticulous planning of subsequent subtasks, queried to the individual agents, in order to execute the master task successfully. Real-time (re)planning is essential to ensure the task can still be achieved when subtask execution suffers from uncertainty or when the master task changes intermittently, requiring real-time reconfiguration of the plan. In this work we develop a supervisory control architecture tailored to the cooperation of two robotic manipulators equipped with standard pick-and-place facilities in the plane. We control the planar position and orientation of an object using two underactuated manipulators, whereby only the position of the object can be controlled directly. The desired orientation follows from the accumulation of alternating relative angles. A time-invariant policy function is trained using deep reinforcement learning, which can determine a finite sequence of pick-and-place maneuvers to manipulate the object to a desired configuration. Two policy architectures are compared. The first uses the kinematic model to determine the final step, whilst the second policy makes this decision itself. The more information the policy is given, the easier it trains. In return, it becomes less adaptable and loses some of its generalisability.".
- 01GMVB3FBYSDEENV59B9AVX6ZS abstract "Newspapers contain a wealth of historical information in the form of articles and illustrations. Library and cultural heritage institutions have been digitizing their collections for decades to enable web-based access and retrieval of information. Since digitized collections only include page-level metadata, it is challenging to find information about a specific context. In order to address this challenge, article-level information was segmented based on layout analysis techniques. Afterward, an image classification algorithm is used to filter articles that contain building images. An image's caption can provide information about its location. To validate the location information, a textual query is used to search Google for recent images of a historical location. By matching image features, irrelevant results will be filtered out. The historic images were then rephotographed, which is a notoriously challenging and complex process. These results can also be used by researchers in the humanities and cultural heritage to investigate spatiotemporal changes within a region or place.".
- 01GMVB3M8CTYT9035ZC9FMQCBP abstract "Finding alternatives to fossil resources, aiming to fulfil the needs of our society in a sustainable way, is one of the foremost challenges in the 21st century. Within this context, lignocellulose biorefining is a promising technology, enabling a route to renewable chemicals, materials and energy. The lignin-first (LF) biorefinery concept has recently emerged to complement traditional carbohydrate biorefining, with focus on lignin valorization. The reductive catalytic fractionation (RCF) of lignocellulosic feedstocks is a LF biorefining process that combines biomass fractionation, via solvolytic transformations, with heterogeneously catalyzed lignin depolymerization and stabilization, having a high potential towards the sustainable production of added-value building blocks from biomass. In this poster a computational modelling methodology is presented to investigate the reductive catalytic fractionation of lignocellulosic biomass with focus on lignin transformations from its native form in biomass towards the formation of bio-aromatics. First, reaction-diffusion models, relying on simple kinetics, probe the diffusion of lignin model compounds to identify kinetically-limited regimes. Second, two kinetic models based on the continuum theory of lumping are proposed to describe lignin solvolysis and depolymerization, which will be validated using existing and newly acquired experimental data. Finally, the developed models will be combined with molecular reconstruction methods, enabling the structural characterization of key LF products (up to light oligomers).".
- 01GMVB3M8JCYBQRFJ02QF7VRGW abstract "In mammals, kidney diseases are a prevalent condition. In dogs, the occurrence of chronic kidney disease is estimated to be between 0.5-7\% (Lund EM et al.). Renal fMRI is a non-invasive technique that can quantify renal function but is not routinely used in dogs. In this pilot study, we assess whether renal perfusion and oxygenation imaging can be used in dogs. Three healthy Beagles were imaged on a Siemens PrismaFit 3T. For one Beagle the scan session was repeated one week later. For kidney perfusion, renal blood flow (RBF) was measured using an Arterial Spin Labeling (ASL) sequence (Siemens WIP ASP 1023H). For kidney oxygen consumption, a multi-echo GRE sequence with 12 equally spaced echo times was used to generate Blood Oxygen Level-Dependent (BOLD) contrast. RBF maps were computed on the scanner. R2* maps were estimated in MATLAB-R2021b. The TLCO (Twelve-Layer Concentric Objects) method (Piskunowicz M. et al.) was implemented to calculate mean R2* and RBF in the outer and inner layers of each kidney. Measured RBF [ml/min/100g] in the outer layers of the right (R) and left (L) kidney was R: 324.42 $\pm$ 63.83 and L: 341.02 $\pm$ 77.25, respectively. For the inner layers, RBF was R: 139.76 $\pm$ 58.91 and L: 161.81 $\pm$ 59.58. Measured BOLD [s-1] in the outer layers of the kidney was R: 30.70 $\pm$ 7.19 and L: 23.97 $\pm$ 3.02. For the inner layers, BOLD was R: 18.69 $\pm$ 4.64 and L: 19.00 $\pm$ 5.01. These pilot scans confirm the viability of ASL-BOLD-MRI for canine renal assessment. The RBF values obtained in the cortex are similar to those obtained in previous ASL studies. The BOLD-MRI measurements in dogs tend to diverge and this sequence requires more refinement. This pilot study allows comparison of RBF and BOLD among several subjects. These results are complementary to an upcoming study with double sessions for 10 dogs.".
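A minimal sketch of how an R2* map can be estimated from multi-echo GRE magnitude data via a log-linear fit of the monoexponential decay S(TE) = S0 * exp(-R2* * TE); the echo times and the fitting approach below are illustrative assumptions, not necessarily the exact MATLAB pipeline used in the study:

```python
import numpy as np

def fit_r2star(signal, echo_times):
    """Voxel-wise R2* [1/s] from multi-echo magnitudes, assuming
    S(TE) = S0 * exp(-R2* * TE). signal: (..., n_echoes),
    echo_times: (n_echoes,) in seconds."""
    log_s = np.log(np.clip(signal, 1e-12, None))
    te = echo_times - echo_times.mean()
    # Least-squares slope of log S versus TE; R2* is its negative
    slope = (log_s * te).sum(axis=-1) / (te**2).sum()
    return -slope

# 12 equally spaced echoes (illustrative values, in seconds)
tes = np.linspace(0.003, 0.036, 12)
true_r2s = 25.0  # 1/s
sig = 100.0 * np.exp(-true_r2s * tes) + np.random.default_rng(1).normal(0, 0.2, 12)
print("Estimated R2*:", fit_r2star(sig, tes))
```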
- 01GMVB3M8QAZ1JS4AWC8527WZ3 abstract "Weathering of construction materials is one of the main causes of building deterioration and failure. Amongst the most important factors contributing to this outcome is the presence of salts in the system and their crystallization/dissolution behavior during ageing. These salts occupy the space within the pore network of the materials and manifest themselves only once some damage has already been caused. It is of utmost importance to develop efficient methods for testing, evaluating and predicting this damage phenomenon in order to obtain better and more durable materials. This research intends to employ two contamination regimes to test the durability of the mortar mixtures. X-ray micro-computed tomography will be utilized as the main tool for the assessment of damage due to salt crystallization on lime-based mortars in a non-destructive manner.".
- 01GMVB3S5515MSVSH62QG01GZW abstract "In the context of the current energy crisis, improving the efficiency of highly energy-consuming processes is more relevant than ever. One of the most energy-intensive chemical processes is the separation of alkanes and alkenes, which accounts for 0.3\% of the world's energy consumption (Stephenson, 2019). The most common way to separate alkanes and alkenes is via cryogenic distillation. However, a significant improvement in energy retention could be achieved by applying room-temperature sieving of the molecules. For this purpose, membranes made out of nanoporous crystalline materials known as zeolitic imidazolate frameworks (ZIFs) will be investigated for their high selectivity in the separation of ethane from ethene (C2) and propane from propene (C3) species at room temperature. In this work, the nature of C2/C3 separation is investigated by means of a diffusion analysis in ZIF-8. To examine the diffusion behavior of individual molecules, computer simulations are performed that solve Newton's equations of motion on a molecular level using classical potentials for the inter- and intramolecular interactions. The diffusion path of an alkane or alkene molecule through ZIF-8 is modeled with advanced MD simulation techniques. The result of this simulation is the thermodynamic free energy and its enthalpic and entropic contributions as a function of the progress along the diffusion pathway. From this free energy profile a diffusion rate can be determined. This means that the structure of the molecular system and the diffusion rate of guest molecules can be correlated. By applying this modeling procedure to a number of different host materials and guest species, more thorough insight into the effects of different structural elements on hydrocarbon diffusion can be obtained, which is an essential element in the design of new materials for a highly efficient separation of alkanes and alkenes.".
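As a rough illustration of the last step, a hopping rate can be estimated from a free energy profile with an Eyring-type expression k = (kB T / h) exp(-dF / kB T); the profile below is synthetic and the prefactor choice is a common textbook approximation, not the exact rate procedure used in the study:

```python
import numpy as np

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s
NA = 6.02214076e23   # Avogadro constant, 1/mol

def hopping_rate(free_energy_profile_kj_mol, T=300.0):
    """Estimate a hopping rate [1/s] from a 1D free energy profile along
    the diffusion coordinate (values in kJ/mol), using the barrier height
    relative to the initial minimum and an Eyring-type prefactor kB*T/h."""
    profile = np.asarray(free_energy_profile_kj_mol)
    barrier_j = (profile.max() - profile[0]) * 1e3 / NA  # J per molecule
    return (KB * T / H) * np.exp(-barrier_j / (KB * T))

# Synthetic profile: a single 25 kJ/mol barrier between two cages
xi = np.linspace(0.0, 1.0, 101)
profile = 25.0 * np.sin(np.pi * xi) ** 2
print("Estimated hopping rate [1/s]:", hopping_rate(profile))
```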
- 01GMVB3S5ARP0A5Y1APZCZDGCE abstract "High-capacity multi-die FPGA systems generally consist of multiple dies connected by external interposer lines. These external connections are limited in number. Further, these connections also contribute to a higher delay as compared to the internal network on a monolithic FPGA and should therefore be sparsely used. These architectural changes compel the placement \& routing tools to minimize the number of signals at the die boundary. Incorporating a netlist partitioning step in the CAD flow can help to minimize the overall number of cross-die connections. Conventional partitioning techniques focus on minimizing the cut edges at the cost of generating unequal-sized partitions. Such highly unbalanced partitions can affect the overall placement \& routing quality by causing congestion on the denser die. Moreover, this can also negatively impact the overall runtime of the placement \& routing tools as well as the FPGA resource utilization. In previous studies, a low value of the unbalance was proposed to generate equal-sized partitions. In this work, we investigate the factors that influence the netlist partitioning quality for a multi-die FPGA system. A die-level partitioning step, performed using hMETIS, is incorporated into the flow before the packing step. Large heterogeneous circuits from the Koios benchmark suite are used to analyze the partitioning-packing results. Consequently, we examine the variation in output unbalance, the number of cut edges and die-size vs the input value of unbalance. We propose an optimal parametric value of the unbalance factor based on the findings for achieving the desired partitioning quality.".
- 01GMVB3S5EB5BMFQAKA91W9J2T abstract "Bandwidth demand is ever-increasing and expected to keep increasing for at least 15 years. One of the main ways to provide such bandwidth is by installing fiber to the home (FTTH). We propose and test three metaheuristic approaches to extend a single-level FTTH network design heuristic to multiple levels. Each heuristic is evaluated on realistic graphs with over 30000 nodes and 800 terminals. We find that all metaheuristics provide a small but significant gain in quality at the cost of more running time. Since fiber installation can be expensive, the metaheuristic improvements can be a valuable extension for FTTH network designers.".
- 01GMVB3Y1X2XVAEBZP0B7XZNKK abstract "Real-world problems usually consist of two or more conflicting objectives; hence there is no single solution that optimizes all of the objectives simultaneously. At the end of the optimization process, there is a set of solutions, each of which is better in one objective and worse in at least one other. In other words, they do not dominate each other. The final non-dominated solutions are called the Pareto front in the objective space. Multi-objective optimization requires many evaluations to identify a sufficiently dense approximation of the Pareto front. Especially for a higher number of objectives, extracting the Pareto front is neither easy nor cheap. At the same time, not all the solutions in the Pareto front are attractive to the Decision-Maker, who might prefer a solution with a desirable trade-off between the different objectives. An example of such an attractive solution is the knee point of the Pareto front, although there is no unique definition of the knee point in the literature. Additionally, all the current research on finding the knee point is based on evolutionary algorithms, which evaluate the objective functions many times; hence they are not data-efficient. They are nevertheless still preferred when the objective functions exhibit highly non-linear behaviour and each evaluation is neither time-consuming nor expensive. In this work, we propose to detect knee points in a data-efficient manner (i.e., with a limited number of time-consuming evaluations) based on two different definitions of what a knee point is. In particular, we propose several novel acquisition functions in the Bayesian Optimization framework for detecting these knees, which allows for scaling to many objectives. The suggested acquisition functions are evaluated on various benchmarks with promising results.".
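One common way to operationalise a knee point (among the several definitions alluded to above) is the point of a bi-objective Pareto front with the largest perpendicular distance to the line through the two extreme solutions. A minimal sketch of that definition follows, on a synthetic front; it does not reproduce the proposed acquisition functions themselves:

```python
import numpy as np

def knee_point(front):
    """Knee of a 2D Pareto front (minimization): the point farthest from
    the straight line joining the two extreme solutions.
    front: array of shape (n_points, 2)."""
    front = np.asarray(front, dtype=float)
    a = front[np.argmin(front[:, 0])]   # extreme solution for objective 1
    b = front[np.argmin(front[:, 1])]   # extreme solution for objective 2
    d = (b - a) / np.linalg.norm(b - a)
    rel = front - a
    # Perpendicular distance of each point to the line through a and b
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
    return front[np.argmax(dist)]

# Synthetic convex front with a pronounced knee near (0.25, 0.25)
f1 = np.linspace(0.0, 1.0, 50)
front = np.column_stack([f1, (1.0 - np.sqrt(f1)) ** 2])
print("Knee point:", knee_point(front))
```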
- 01GMVB3Y223S7C9CCTT7QT845K abstract "With the significantly growing demand for electric vehicles and more-electric aircraft, pure electric drive systems will be used as an alternative to internal combustion engines. For this target, a more flexible production process is required in the short term. Lightweight electrical machines with high power density are emerging as a feasible solution to combine high performance with lower environmental impact. Metal additive manufacturing (AM) has been growing remarkably in the past few years. Thanks to the advantages of unmatched flexibility and zero material waste, this clean technology opens the door for new design solutions with greater material efficiency, which are not possible through conventional machining techniques. With the ultimate freedom in geometry, AM enables new possibilities in the design of electrical machines. This includes the 3D printing of passive parts such as housings, rotor shafts, bearings, and heat exchangers for direct cooling inside slots. This has also been extended to the 3D printing of active parts such as the core and the windings. In this study, the main focus is on AM for electrical machines, combining lightweight material with high electromagnetic performance, especially at high operating frequencies. This is very interesting for the automation industry, since there is a trend towards high speeds and high numbers of poles, leading to high frequencies.".
- 01GMVB3Y26WQ82FZF4YX915G6V abstract "3D printed cement-based materials have a multi-layered structure due to extrusion layer by layer and without vibration. Interlayer zones are considered weak connections. In order to improve interface quality and accelerate the application of extrusion-based 3D printed cement-based materials, a comprehensive understanding of the formation mechanism of layer interface zone is necessary. Here, we reveal the multi-scale characteristics of phase distribution at the layer interface zone in printed materials by optical microscopy and scanning electron microscopy with energy dispersive spectroscopy. Results show that compared to the matrix, the interface is not merely porous, but also deficient in aggregate and enriched in calcium hydroxide. In addition, the heterogeneous phase distribution dominates the interlayer bonding strength.".
- 01GMVB42YQZEW08BG2C054PHK0 abstract "In this study, the numerical modelling of the Thermo-Elasto-Hydrodynamic Lubrication (TEHL) contact is investigated. The flow model, structure, and lubricant rheology models have been developed in the OpenFOAM (extend version) package. Regarding the complex and cavitating flow in the TEHL contact, a Homogeneous Equilibrium Model (HEM) is used, including the thermal effect and the variation in lubricant properties due to pressure, temperature, and shear rate. Molecular Dynamics (MD) can also be employed to calculate the density and viscosity as a function of the state variables and shear rate. Besides the linear elastic equation, the heat conduction equation is solved to describe the solid deformation and temperature distribution in the solid domain. Furthermore, a partitioned Fluid-Structure Interaction (FSI) methodology is employed to provide a two-way coupling between the fluid and solid regions. Along with the FSI modelling, a Conjugate Heat Transfer (CHT) simulation is essential to precisely investigate the thermal behaviour of TEHL contacts. The developed TEHL model properly captures the physics involved, in agreement with accepted data for rolling-sliding 2D line contacts from the literature, and provides trustworthy results for different operating conditions for 2D/3D smooth and rough contacts.".
- 01GMVB42YWGE0RZ2WMYW78TY9M abstract "The 5th generation of cellular networks promises unprecedented data rates. However, the technology is not there yet to keep this promise in high-density environments such as sports stadiums and shopping malls. A 750 Gbps aggregated data rate is foreseen in a medium-size stadium. To support this tremendous amount of data, a large number of antennas spread around the stadium is required. This reduces interference, allowing multiple users to be served at once using distributed beam steering and distributed massive MIMO techniques. One of the main challenges remains how to interconnect all these antennas such that they can be jointly optimized for the environment. This project will combine efficient sigma-delta radio over fiber (SDoF) technology with efficient beam-forming systems capable of sending information to different users in different directions. SDoF converts a multi-level radio signal into a binary stream from which the original signal can be recovered with simple filtering. As such, reception is very easy. This project will extend this concept to support multiple streams simultaneously and perform beamforming on them. This can be done very efficiently due to the seemingly digital nature of the signal. Furthermore, this concept will also be extended towards the uplink path of the radio link, which is currently missing in all SDoF demonstrations. Within this project, an integrated circuit will be developed that incorporates these techniques to validate them in a system demonstrator.".
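A minimal sketch of the sigma-delta principle mentioned above: a first-order modulator turns a multi-level (here sinusoidal) signal into a binary stream, and a simple low-pass filter recovers the original waveform. The sample rate and filter choice are illustrative only; this is a baseband toy, not the actual radio-over-fiber implementation:

```python
import numpy as np

def sigma_delta_first_order(x):
    """First-order sigma-delta modulation of x (values in [-1, 1]):
    integrate the quantization error and output +/-1 at each sample."""
    integrator = 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - (bits[i - 1] if i > 0 else 0.0)
        bits[i] = 1.0 if integrator >= 0.0 else -1.0
    return bits

def moving_average(bits, n=64):
    """Crude low-pass filter recovering the signal from the bitstream."""
    return np.convolve(bits, np.ones(n) / n, mode="same")

fs, f0 = 1_000_000, 1_000                 # heavy oversampling of a 1 kHz tone
t = np.arange(0, 0.005, 1.0 / fs)
x = 0.8 * np.sin(2 * np.pi * f0 * t)
recovered = moving_average(sigma_delta_first_order(x))
print("max reconstruction error:", np.max(np.abs(recovered - x)))
```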
- 01GMVB42YZ7M41FSKMW0A8VYYD abstract "For every project or application, it is important to select a material that exhibits the desired mechanical properties. To be able to do so, the performance of multiple materials needs to be determined via mechanical testing. Here, we present the ability to combine mechanical tests with microstructural observations, which gives new insight into the microscale deformation and fracture mechanisms present in materials. Thanks to Scanning Electron Microscopy (SEM) during testing, we are able to perform in-situ visualization of the microstructure of the material and how it responds to mechanical loads. SEM offers a high resolution, a wide range of magnification, and a greater depth of field than traditional optical microscopy techniques. With these new insights, materials can be tuned more efficiently towards their specific applications.".
- 01GMVB47VD3A5SZW2VPTAP11CH abstract "Collisions at high speed can damage robotic drivetrains. The risk for such a collision is particularly high for hybrid high-speed cobots that switch between collaborative mode and high-speed mode. An error by the human operator in collaborative mode can lead to catastrophic failure of the robot's hardware in high-speed mode. An overload clutch with integrated torque sensing and clutch decoupling detection is presented as enabling technology. Both joint torque sensing and clutch decoupling detection are realized with the same capacitive measurement hardware that consists of paired electrodes. A prototype device is experimentally validated through comparison with a reference torque sensor.".
- 01GMVB47VX1ZMS7DGG74G7PZD9 abstract "LiDAR provides high resolution detection of objects over large distances using light. However, fulfilling the performance metrics of mass-reproducible solid-state LiDAR is an ongoing challenge. Particularly, automotive forward-looking LiDAR requires projecting and receiving light at distances over 200m. Large frame rates, high angular resolution and wide field-of-view are essential to construct an accurate 3D point cloud of the surrounding environment. To have a sufficient photon count on the LiDAR detector, a collimated beam of high power needs to be emitted from the transmitter. To utilize LiDAR for the automotive industry, the device needs to be low cost, with small form-factor and low power consumption. Therefore, implementing LiDAR on photonic chips is a promising approach due to the low-cost volume production of CMOS foundry processes, the low power consumption of photonic components and the small chip-based form-factor compared to other LiDAR implementations. The two main functionalities that photonics contribute to the LiDAR engine are ranging (determining the distance to an object) and beam steering (pointing the beam in a certain direction). In our work, we focus on the latter functionality, based on the concept of optical phased arrays (OPAs). An OPA is an array of optical antennas where the amplitudes and phases of the emitted optical signals are controlled to manipulate the total far-field radiation pattern. This pattern corresponds to the light signal that is projected onto the object to be detected. Designing OPA architectures with high fill-factor and antennas with long length and high efficiency should make it possible to attain the field-of-view and resolution requirements of automotive LiDAR. Low power circuits of small form-factor require optimizing the photonic components to have low loss, small footprint, along with improving dense circuit routing techniques. Such improvements pave the way to build large scale photonic circuits for a fully integrated LiDAR system.".
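As a small illustration of the optical phased array concept described above, the far-field array factor of a uniform linear array can be computed by summing the phased contributions of the emitters; the element count, pitch and wavelength below are arbitrary placeholders, not the designed OPA parameters:

```python
import numpy as np

def array_factor(theta, n_elements, pitch, wavelength, steer_angle=0.0):
    """Normalized far-field array factor of a uniform linear optical
    phased array. A linear phase gradient across the emitters steers the
    main lobe towards steer_angle. Angles in radians, lengths in micrometers."""
    k = 2.0 * np.pi / wavelength
    n = np.arange(n_elements)
    phase = -k * pitch * n * np.sin(steer_angle)        # steering phases
    # Coherent sum of the phased emitter contributions in each direction
    af = np.exp(1j * (k * pitch * n[None, :] * np.sin(theta)[:, None] + phase))
    return np.abs(af.sum(axis=1)) / n_elements

theta = np.radians(np.linspace(-30, 30, 601))
af = array_factor(theta, n_elements=32, pitch=2.0, wavelength=1.55,
                  steer_angle=np.radians(10))
print("main lobe at (deg):", np.degrees(theta[np.argmax(af)]))
```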
- 01GMVB47W1FSW967S7T0YAERQ8 abstract "The migration from 4G to 5G cellular communication forced researchers to explore different techniques. One of them was looking towards mm-wave frequencies in search of more bandwidth. Looking forward to the needs of the next generation, 6G, even higher data rates will be required and higher spectral bandwidth is needed to support this. This can be found around 140 GHz. However, at these frequencies, communication becomes even more challenging, requiring very large antenna arrays to concentrate radiated energy in the user's direction to overcome the high path losses. This project will explore how novel assembly techniques can help to bring photonics, electronics and antennas closer together to build low-cost but highly efficient large-scale antenna arrays. The core will be a photonic beamforming chip, which will provide the signal processing to drive all antennas in the array. On top of this chip, tiny amplifiers will be assembled using micro transfer printing, allowing very close, low-loss integration. Finally, these amplifiers will be interconnected towards antennas that are integrated on the back of the photonic IC, completing a highly integrated antenna array. Following this approach, integration densities of 100 antennas / cm² are foreseen.".
- 01GMVB4CRF5PEENM5G2YZKAGXK abstract "Offshore wind turbine support structures undergo pitting corrosion due to the marine environment. In addition, these structures are subject to fatigue loads due to wind and waves. The corrosion-fatigue phenomenon is considered one of the most dangerous damage mechanisms for offshore structures. Corrosion pits attract stress, which is why they are prone to turning into cracks. The main objective of this work is to carry out computational studies on pitting corrosion and the pit-to-crack transition. For the pitting corrosion simulations, the phase-field modeling method is chosen. On the other hand, in order to predict the potential location of crack initiation, the stress concentration factor (SCF) concept is used, for which a linear elasto-static stress analysis is implemented using the finite element method.".
- 01GMVB4CRM5YK4W887AQ0J6YW9 abstract "In the tissue engineering field, the surface modification of hydrophobic nanofibers (NFs) to introduce cell-interactive chemical functionalities remains a challenge. Therefore, this study explores a novel three-step plasma-based method to synthesize coatings with a higher chemical selectivity in comparison to conventional plasma polymers. First, hexamethyldisiloxane (HMDSO) plasma polymerization was performed on NFs (a blend of poly-$\epsilon$-caprolactone and poly(D,L-lactide-co-glycolide)), after which the samples were exposed to a helium plasma. Both steps were performed in a medium-pressure dielectric barrier discharge. Scanning electron microscopy (SEM) showed that the plasma-based steps did not damage the NFs, while X-ray photoelectron spectroscopy (XPS) and water contact angle measurements showed that a hydrophilic silanol-rich layer was formed. This layer was used in the final step, in which (3-aminopropyl)triethoxysilane (APTES) was grafted on the plasma polymer to introduce primary amine groups onto the surface, which was confirmed by XPS. This APTES-based layer could be partially removed, but the remaining stable layer enhanced Schwann cell responses in live/dead, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide (MTT), immunofluorescence, and adhesion assays in comparison to the untreated and HMDSO-coated NFs. As such, the fabricated coatings with selective functionality on PCL/PLGA NFs are a promising candidate for use in the tissue engineering field.".
- 01GMVB4CRRTAKDMACX3CEXY410 abstract "Fusion power plants will need to run in a reliable way, in order to maximize the power output and avoid delays due to unscheduled maintenance or damage to components. Predictive maintenance is an approach that can contribute to this requirement by periodic or continuous monitoring of the condition of equipment. The goal is to predict when the equipment will require maintenance and, ultimately, to provide an estimate of the remaining useful lifetime of devices and components. This allows better maintenance scheduling and can help avoid damage due to equipment failure. In this work, we introduce a number of statistical techniques that can be employed toward predictive maintenance in fusion devices. We then present an application to circuit breakers in the JET Ohmic heating circuit, powering the central solenoid. These circuit breakers are an important cause of failed pulses at JET, because they endure physical fatigue due to the large currents required for plasma formation. Using voltage and current data in the circuit, we employ on-line Change Point Detection (CPD) algorithms to determine whether the circuit breakers operate in a healthy or deteriorated stage. In general, this approach can provide an advance warning of the deteriorating condition of subsystems and components in fusion devices, allowing maintenance as needed and preventing asset damage.".
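As a toy example of on-line change point detection on a monitored signal, the sketch below applies a simple one-sided CUSUM statistic to a synthetic current trace; CUSUM is just one representative CPD algorithm and not necessarily the one applied to the JET data:

```python
import numpy as np

def cusum_online(signal, target_mean, slack=0.5, threshold=8.0):
    """One-sided CUSUM: accumulate positive deviations from the expected
    (healthy) mean and flag the first sample where the statistic exceeds
    the threshold. Returns the alarm index, or None if no alarm."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(42)
healthy = rng.normal(10.0, 1.0, 300)    # nominal current level
degraded = rng.normal(11.5, 1.0, 200)   # small upward shift after degradation
trace = np.concatenate([healthy, degraded])
print("alarm raised at sample:", cusum_online(trace, target_mean=10.0))
```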
- 01GMVB4HN6H18Y961TD3WKPBGA abstract "Predictive models can be integrated in the sensing and monitoring methodologies of mechatronic systems in operation. When systems change or are subject to varying operating conditions, adaptivity of the models is needed. The goal of this paper is to enable this adaptivity by presenting a framework for continual learning. The framework aims to transfer and remember information from previously learned systems when a model is updated to new operating conditions. We achieve this by means of the following three key mechanisms. First, we include physical information about the system, heavily regularizing the model output. Secondly, the use of epistemic uncertainty, as an indicator of the changing system, shows to what extent a transfer is desired. Last but not least, the use of a prior within a Bayesian framework allows models to be regularized further according to previously obtained information. The last two principles are enabled thanks to the use of Bayesian neural networks. The methodology is applied to a cam-follower system in a simulation environment, where results show that previously trained systems are better remembered, with an increase of 72\% compared to normal training procedures.".
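A minimal sketch of the third mechanism above: regularizing training on the new operating condition towards previously learned parameters with an uncertainty-weighted quadratic penalty, in the spirit of a Gaussian prior on the weights. The tiny linear model, weighting and data are placeholders, not the Bayesian neural network implementation from the paper:

```python
import numpy as np

def prior_regularized_loss(pred, target, weights, prior_mean, prior_precision, lam=1.0):
    """Data-fit term plus a quadratic penalty pulling the current weights
    towards the previously learned weights (prior_mean), weighted by how
    certain we were about each weight (prior_precision)."""
    data_fit = np.mean((pred - target) ** 2)
    penalty = 0.5 * np.sum(prior_precision * (weights - prior_mean) ** 2)
    return data_fit + lam * penalty

# Placeholder quantities for a tiny linear model y = w . x
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
w_old = np.array([1.0, -2.0, 0.5])        # weights learned on the old system
precision = np.array([10.0, 10.0, 0.1])   # high precision = well-identified weight
w_new = np.array([1.2, -1.9, 1.5])        # candidate weights for the new system
y = x @ np.array([1.1, -2.0, 1.4])        # data from the new operating condition
print(prior_regularized_loss(x @ w_new, y, w_new, w_old, precision))
```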
- 01GMVB4HNB55YZW5B6F7YZHWHQ abstract "The wireless network evolution to smaller carrier wavelengths (from 2G to 5G) could increase the RF-EMF absorption in insects. Near-field RF-EMF exposure of insects to antenna radiation has only been studied near a single antenna element. However, nearly all realistic base station antennas are multi-element arrays. In this research, we show using FDTD that at a separation distance of 10 cm from a dipole, the RF-EMF absorbed power (Pabs) in a western honey bee, in the range of 6-120 GHz, maximizes at 12 GHz. Moreover, our results show that when irradiated by a phased array operating at 60 GHz, Pabs maximizes in the separation distance range of 0.1-2 cm, at which ¼ of the input power into the array can be absorbed by the honey bee.".
- 01GMVB4HNF79BSZG96K00WARM9 abstract "Many machine components, such as bearings and gears, operate under the so-called Thermo-Elasto-Hydrodynamic Lubrication (TEHL) regime. In this specific regime, the surfaces in relative motion are entirely separated by a thin lubricant film. There are two main types of MD simulations that are used to describe the dependence of thermomechanical properties on the state variables (pressure, temperature, and shear rate), namely Equilibrium MD (EMD) and Non-Equilibrium MD (NEMD). While the EMD methods are typically used to model the Newtonian behavior of fluids in the limit of vanishing shear rates, the NEMD methods are used to model the non-Newtonian behavior at higher shear rates. In MD simulations, the accuracy is mainly determined by the accuracy of the force fields used for describing the inter-atomic interactions in the system. When EMD simulations are used, apart from the accuracy of the force fields, the specific way of postprocessing the EMD trajectories has a serious impact on the accuracy of the calculated properties. In this study, EMD simulations are used to determine the viscosity of lubricants over a wide pressure and temperature range. A reliable postprocessing procedure was developed to analyze EMD trajectories for viscosity calculations with increased accuracy and robustness.".
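For EMD-based viscosity calculations, one standard postprocessing route is the Green-Kubo relation, which integrates the autocorrelation of an off-diagonal pressure-tensor component. The sketch below shows that relation on synthetic data and only illustrates the general approach, not the improved procedure developed in the study:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_viscosity(p_xy, dt, volume, temperature):
    """Shear viscosity from the Green-Kubo relation
    eta = V / (kB T) * integral of <P_xy(0) P_xy(t)> dt,
    with the autocorrelation estimated from a single EMD trajectory.
    p_xy: off-diagonal pressure component [Pa] per timestep,
    dt: timestep [s], volume [m^3], temperature [K]."""
    n = len(p_xy)
    p = p_xy - p_xy.mean()
    max_lag = min(n // 2, 1000)
    acf = np.array([np.mean(p[: n - lag] * p[lag:]) for lag in range(max_lag)])
    return volume / (KB * temperature) * np.trapz(acf, dx=dt)

# Synthetic stand-in for an EMD pressure-tensor time series (toy data)
rng = np.random.default_rng(3)
p_xy = rng.normal(0.0, 5.0e6, 20_000)  # Pa
eta = green_kubo_viscosity(p_xy, dt=1.0e-15, volume=8.0e-27, temperature=300.0)
print("Green-Kubo viscosity estimate [Pa s]:", eta)
```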
- 01GMVB4PHXSW6K3Y64RWJXRJKR abstract "The existing building stock is outdated, consumes a lot of energy, and is a major contributor to the global greenhouse gas emissions. Consequently, there is an urgent need for a transition of the existing building stock towards energy and carbon neutral buildings. There are three main pathways that could facilitate this transition: one-step deep energy renovation, step-by-step deep energy renovation, and demolition followed by new build. The importance of a sustainable transition, however, raises the question of how the environmental impact and financial cost of these three main pathways relate for different types of single-family dwellings. Several researchers have already searched for decision-making methods that integrate Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) to assess both environmental impacts and financial costs. However, a systematic comparison of the three pathways is lacking. Moreover, existing standards on how to perform an LCA are very conceptual and too vague to allow for a fair and consistent comparison of the three main pathways. This leaves a lot of freedom to LCA practitioners to make assumptions, implement simplifications and set their own boundary conditions. This can contribute to variations and contradictions in the trade-off between the pathways. This research project, therefore, aims to develop a well-defined and robust methodological LCA framework to compare the three main pathways in a fair and consistent way, provide insight into which pathways are most optimal for different types of single-family dwellings, and determine tipping points in the trade-off between the pathways to define more tangible and general building renovation guidelines.".
- 01GMVB4PJ2K2448NA2JDSVFT3A abstract "This poster presents a general outline of my PhD research, which takes as its starting point my ancestral village located in a peri-urban area of Nanhai district in the Pearl River Delta in China. Prevalent readings of the Delta focus on the striking spatial contrasts and large scale developments that have come about since the start of the economic reforms in 1978. Instead, my research starts from the observation that it is insufficient to view this area from a developmental point of view, which risks homogenising this multi-layered landscape. Informed by my experiences on the ground, I propose a leap from the quantitative territory to the qualitative landscape, to what Tim Ingold would call ``the world as it is known to those who dwell therein.'' I center ways in which people dwell, make inhabitable the world despite marked spatial changes. This is done on the one hand on the abstract level of representing this inhabiting through text and visualisations, on the other hand on the concrete level of inscribing the lived landscape through everyday activities, thus interpreting it as ``an enduring record of -- and testimony to -- the lives and works of the past generations who have dwelt within it, and in so doing, have left there something of themselves.'' Processes today are equally valued as another chapter in the ongoing life. Drawing from the field of landscape biography as proposed by the Dutch archaeologist Jan Kolen and colleagues, a longue dur\'ee approach is taken in order to grasp processes and transformations of Nanhai at different rhythms and on different timescales for three aspects: the cosmological, the ancestral, and the working landscape. My PhD is structured around these three aspects of landscape, each thus forming another lens and entryway. The poster shows a glimpse of each chapter.".
- 01GMVB4PJ5BFVV16J6JPFD7NXV abstract "In recent years, environmental efforts in the building sector have mainly concentrated on reducing the operational energy, while a well-considered use of materials is at least as important to further reduce our ecological footprint. The concept of `Urban mining' considers buildings as material banks, which can be dismantled into valuable components and deliver materials for reuse in other building projects. This PhD aims to gain more insight into the reuse potential of building materials derived from single-family and small multi-family dwellings, with a focus on social housing. In the first place, the typical construction assemblies and building materials in 10 selected case studies from different construction periods will be mapped. For these materials, technical criteria that impact the reuse potential (e.g. ease of disassembly, exposure, etc.) will be researched. Both the financial costs and the environmental impact of reuse (i.e. reclamation processes, storage, etc.) will be investigated for at least 10 building materials using LCC and LCA. Based on the technical criteria, LCC and LCA, a comprehensive framework will be developed allowing various stakeholders to gain insight into the reuse potential of existing building materials.".
- 01GMVB4VEMBN0D0K0XSEY0SRVP abstract "Selective assembly is a production technique whereby components are smartly instead of randomly matched based on their measured feature values. Although it can reduce scrap by allowing more relaxed tolerances on components, it is often not adopted due to the high logistic costs related to measuring and matching operations. Therefore, we propose and investigate a hybrid assembly strategy whereby a number of traditional assembly rounds are performed before switching to selective assembly. At each round, the randomly selected components which did not lead to a feasible assembly are put into their respective unmatched components piles and are used for the next random assembly round. When N such rounds have been performed, the remaining unmatched components will finally be assembled via selective assembly. Due to these prior random assembly rounds, fewer selective assembly operations will eventually be needed, which considerably reduces logistic costs related to selective assembly. On the other hand, every additional round also implies random assembly trial costs. Hence, an optimal number of random assembly rounds before moving on to selective assembly has to be determined. Therefore, a cost function is developed together with an optimality condition. This cost function is expressed in terms of the matching probabilities at each round and the relative costs of selective and random assembly. Since the analytic calculation of the matching probabilities is in most of the practical cases not feasible, a simulation technique was developed to estimate these probabilities. Particular attention is paid to reducing the variance of the estimators for the matching probabilities, and to their impact on the total variability of the cost function. Results from the application of this innovative method to two study cases demonstrate that the proposed hybrid assembly technique can significantly reduce average assembly costs.".
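A rough sketch of the simulation idea described in the entry above: estimate by Monte Carlo the fraction of components matched in each of N random assembly rounds, and combine those fractions with assumed per-operation costs for random and selective assembly. The tolerance model, cost values and matching rule are placeholders, not those of the two study cases:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_hybrid_assembly(n_parts=1000, n_random_rounds=2,
                             tol=0.6, c_random=1.0, c_selective=4.0):
    """Estimate the average per-component cost of a hybrid strategy:
    components are paired at random for a number of rounds, pairs whose
    feature values differ by more than `tol` go back to their piles, and
    the remainder is finally matched via (costlier) selective assembly."""
    a = rng.normal(0.0, 1.0, n_parts)   # feature values of component type A
    b = rng.normal(0.0, 1.0, n_parts)   # feature values of component type B
    total_cost = 0.0
    for _ in range(n_random_rounds):
        rng.shuffle(a)
        rng.shuffle(b)
        total_cost += len(a) * c_random        # one trial cost per random pair
        ok = np.abs(a - b) <= tol              # feasible random matches
        a, b = a[~ok], b[~ok]                  # unmatched parts go back
    total_cost += len(a) * c_selective         # selective assembly for the rest
    return total_cost / n_parts

for n_rounds in range(4):
    print(n_rounds, "random rounds ->",
          round(simulate_hybrid_assembly(n_random_rounds=n_rounds), 3))
```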
- 01GMVB4VESN7JWACGXSH1GDE88 abstract "Spare parts demand forecasting has been an interesting area of research over the past years. This area of research is getting increasing attention due to the enormous costs associated with managing and owning spare parts, while spare parts availability is critical in industries where any process blockage will cost thousands up to millions of euros. To this extent, various spare parts forecasting techniques exist to decrease the stock level, without sacrificing the required service availability. These techniques mostly use historical demand data (time series) to forecast future demand. Lately, the fast technological development in sensors and communication technologies has paved the path towards real-time condition monitoring, which in turn enabled Condition-Based maintenance (CBM). In CBM, real-time condition information allows planners to anticipate future component failures, and maintenance actions are planned accordingly. However, few researchers have considered exploiting condition information in spare parts decision-making. Doing so could be beneficial because real-time condition data gives real-time information about future spare parts demand. We propose an inventory policy that exploits the condition information and pooling effect in spare parts decision-making. It anticipates the need for a spare to perform the maintenance action by ordering one when the degradation crosses an Order Threshold smaller than the part replacement threshold. We hypothesize that the proposed policy will exploit both condition information and the pooling effect to reduce the average stock level. The policy's two decision variables are the initial stock level and the Order Threshold. We evaluate this policy using Discrete Event Simulation (DES), and we develop a simulation-based optimization algorithm that explores the search space intelligently to find the optimal parameters. We show that the proposed proactive policy reduces the average stock level by 35\% on average.".
- 01GMVB4VEXC8VM11ZTWTZ1SGQT abstract "Wind speed variation imposes a strong nonlinearity and uncertainties on the dynamical behaviour of the wind turbine. Despite the fast computational time of linear controllers, they may not provide acceptable performance over the whole operating range. To tackle these issues, the Soft Switching Multiple Model Predictive Controller (SSM-MPC) is proposed, considering multiple linear models around various operating points (MPPs) that approximate the nonlinear dynamics. The gap metric method is used to assess how close the different linear models are to each other. The proposed controller's performance is studied and compared with a PI controller and a bidirectional control strategy through experiments on a 7.5 kW wind turbine drivetrain.".
- 01GMVB50BFBDDNCJCHY4P5RGN0 abstract "Due to scandals such as Cambridge Analytica, people have been confronted with the dangers of how personal data is being handled online nowadays. To address this problem, Sir Tim Berners-Lee (known as the inventor of the World Wide Web) introduced a new web specification called Solid. Solid allows people to securely store their data in decentralized data Pods (Personal Online Data Stores or data vaults). Any type of data can be stored in a Pod and the user controls what data is being shared with whom. Data in a Pod is saved in interoperable data formats and thus can be used across different applications. In this way, Solid decouples personal data storage from applications. In this experiment, a Solid Pod is used to store users' reading behaviour on a VRTNWS demo website. Based on the data that is stored in their personal data vaults, users are offered personalized news recommendations. We aim to measure the effects of the use of Solid data vaults on the users' perceived transparency, control, security and trust compared to the use of cookies to process personal data.".
- 01GMVB50BR2PG510X6VQR73A7W abstract "Remanufacturing of damaged parts can contribute to a more sustainable economy by reducing energy and materials consumption. For the dimensional restoration and structural repair of steel components, Wire + Arc Additive Manufacturing (WAAM) can be applied. Due to the high cyclical heat input during the WAAM process, a gradient in microstructure and mechanical properties will arise at the deposit-substrate interface. Methodologies are under development to investigate this heterogeneity in mechanical properties through sub-size quasi-static tensile tests and Compact Tension (CT) fatigue testing with extensive instrumentation. During tensile testing of specimens extracted parallel to the deposit-substrate interface, a periodic deformation pattern is observed in the strain field obtained through Digital Image Correlation (DIC). Surface height measurements of the failed specimens show that this pattern is related to the waviness of the interface due to the deposition of adjacent weld beads. CT specimens have been extracted with notches at different locations relative to the interface in order to investigate the fatigue crack growth rate, as well as possible crack path deviation.".
- 01GMVB5587DWFRDTAXD8CZ8DTE abstract "Cooling water systems are frequently used to control process temperature in industrial applications. To avoid performance issues due to material degradation, corrosion inhibitors are added. As a consequence, these chemicals are partly removed from the system in a blowdown and end up in surface water bodies. Aiming to conserve aqueous ecosystems, this work investigates sustainable green corrosion inhibitors for cooling water applications. The inhibition efficiency of sodium silicate (SiO2:Na2O = 3) is evaluated for S235 carbon steel submerged in a 3 wt.\% NaCl environment under dynamic conditions. The test environment is controlled with a rotating cylinder electrode (RCE) with external heating, while potentiodynamic scans (PDS) and electrochemical impedance spectroscopy (EIS) measurements are executed. Both PDS and EIS point out that an increasing silicate dosage results in a higher inhibition efficiency (IE), reaching a maximum of 98.61\% and 99.78\% for PDS and EIS respectively at 10 mM Na2Si3O7 at 25 °C, whilst maintaining a constant rotation speed of 0.5 m/s (surface speed). An increase of temperature (to 35 °C) leads to a decrease in IE at a constant rotation speed. Finally, variation of the flow speed yields mixed findings: EIS shows an increase in IE, reaching a maximum towards 0.75 m/s and decreasing slightly towards higher speeds (>=0.875 m/s).".
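For reference, inhibition efficiencies from PDS and EIS data are commonly computed from the corrosion current density and the charge-transfer resistance, respectively; a small sketch with placeholder values (not the measured values from this work):

```python
def ie_from_pds(i_corr_blank, i_corr_inhibited):
    """Inhibition efficiency [%] from corrosion current densities
    obtained by Tafel extrapolation of potentiodynamic scans."""
    return 100.0 * (1.0 - i_corr_inhibited / i_corr_blank)

def ie_from_eis(r_ct_blank, r_ct_inhibited):
    """Inhibition efficiency [%] from charge-transfer resistances
    obtained by fitting the impedance spectra."""
    return 100.0 * (1.0 - r_ct_blank / r_ct_inhibited)

# Placeholder example values (A/cm^2 and Ohm*cm^2)
print(ie_from_pds(i_corr_blank=2.0e-5, i_corr_inhibited=3.0e-7))
print(ie_from_eis(r_ct_blank=800.0, r_ct_inhibited=5.0e4))
```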
- 01GMVB558B4Z9GJHMPD522HQ30 abstract "The need for faster wireless data is ever-increasing. This puts pressure on the radio spectrum as most sub-6 GHz bands are already congested. Moving to higher bands, in the mmWave and Terahertz (THz) spectrum, unlocks large ranges of bandwidth. At these THz frequencies, however, high-gain directive beams are needed to compensate the higher path loss. Phased Antenna Arrays (PAA) provide this gain but need efficient beamforming feeding networks. In contrast to all-electrical approaches, an optical beamforming network (OBFN) is able to scale to large planar arrays. This is thanks to its small-footprint -- as the radio signals are modulated on an optical carrier with a much smaller waveguide - and low-loss waveguides in Silicon Photonic (SiPh) platforms - feeding every individual antenna element. Generating THz signals is then only limited by the efficiency of the photomixer, i.e. a photodetector that converts the beat note of two optical signals to a THz signal. To this end, we have integrated a photodiode with a bandwidth of 155 GHz on a Silicon Nitride (SiN) waveguide and demonstrated its capabilities in a back-to-back THz link at 280 GHz. Doing so, we achieved data rates beyond 100 Gbit/s.".
- 01GMVB558FANGW4XG5GXBZ2FQY abstract "The increasing demand for renewable energy requires offshore wind turbine structures to be installed in deeper waters and more remote areas. They are subjected to challenging conditions, i.e., the combination of cyclic loads from wind, waves, and currents, and the corrosive nature of the seawater environment. The literature lacks experimental data on the corrosion-fatigue of structural steels, especially concerning the short crack propagation regime. This study involves developing test methods and equipment for corrosion-fatigue testing of welded structural steel and quantifying the propagation rate of short cracks (starting from corrosion pits). The developed models (calibrated by the experiments) allow accurate prediction of the remaining lifetime of steel structures exposed to corrosion damage.".
- 01GMVB5A53DSG9AF661AK3VF4R abstract "This work scrutinizes the PhD research conducted on the changing role of the designers and co-design education through the lens of sustainability. As emphasized by various researchers, the designers' role is less about `designing for' but more about `designing with' users [1, 2, 3]. This shift is caused mainly by the clear understanding of the fact that the active involvement of the users in design processes may lead to real innovation. However, it is seen that how to better facilitate the co-design sessions has not been integrated into the design education curricula [4]. Thus, this PhD research aims to develop a game-based toolkit (`Co-design Facilitator's Game') that would inform, motivate and/or guide design students in the facilitation of the co-design sessions. In this poster, the topic itself and the tangible outcome of this PhD research, which is a phygital game-based toolkit, have been put under the microscope in relation to sustainability. During this PhD research, several sustainability issues have been encountered and strategies have been developed to cope with them. On the technocentric level, the first coping strategy was `dematerialization', which is a crucial concept for environmental sustainability that aims to reduce the use of natural sources to prevent waste [5]. On the human-centric level, the participants' contribution is recognized and attributed through small gifts and certificates, being one of the most essential principles of design justice [6]. The `Co-design Facilitator's Game' can be still improved in relation to sustainability. First, to be more inclusive, multiple senses will be engaged in this toolkit. Moreover, in order to break the linguistic hegemony, the game-based toolkit will be designed to allow transferability and adaptability to several contexts. Lastly, the research-through-design process of this game-based toolkit will continue with the active involvement of design students, designers, and design educators through co-design sessions.".
- 01GMVB5A5912G3TYNZNTGD0P2W abstract "3D concrete printing has the potential to replace shotcrete to construct tunnel linings in hard rock tunnels. The shear strength of the interface between rock and printed concrete is vital, especially at super-young ages. However, traditional methods for testing the shear strength of the interface, e.g., direct shear test, are time-consuming and result in a high variability for fast-hardening printed concrete. A fast bond shear test was newly proposed to test the shear strength of the interface. Each test can be completed in one minute, with another two minutes for preparing the next test. With the 3-minute test, the influence of the matrix composition, the age of printed matrices, and the interface roughness of rock substrate on the shear strength of the interface can be studied.".
- 01GMVB5A5DJ1TZ165QADKXYKZA abstract "Plastics, or polymers, are low-cost materials that show good corrosion resistance, fair strength and low weight. This would make polymers excellent replacement materials for the heavy, expensive and corrosion sensitive metals used in most heat exchangers. This could result in reduced weight of vehicles improving the fuel economy, better functioning desalination plants for providing cheaper potable water and plenty of other possibilities. The only problem here is that polymers are thermal insulators. This problem can be solved by adding heat conducting materials as a filler in the polymer matrix, thus making a composite. This poster covers the basics of thermal conductivity in polymer composites.".
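As a pointer to the kind of relation that governs this, the classical Maxwell (Maxwell-Eucken) model gives a first estimate of the effective thermal conductivity of a polymer matrix with dilute, well-dispersed spherical filler particles; the conductivity values below are generic order-of-magnitude placeholders, not data from the poster:

```python
def maxwell_effective_conductivity(k_matrix, k_filler, phi):
    """Maxwell model for the effective thermal conductivity [W/(m K)] of
    spherical filler particles (volume fraction phi) dispersed in a
    continuous matrix; valid for low filler loadings."""
    num = k_filler + 2.0 * k_matrix + 2.0 * phi * (k_filler - k_matrix)
    den = k_filler + 2.0 * k_matrix - phi * (k_filler - k_matrix)
    return k_matrix * num / den

# Typical polymer (~0.2 W/m K) with 20 vol% of a conductive ceramic filler (~30 W/m K)
print(maxwell_effective_conductivity(k_matrix=0.2, k_filler=30.0, phi=0.20))
```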
- 01GMVB5F1VJSJJ5JQ9GCPHZYSN abstract "Topology optimization is a versatile mathematical method that can be employed to improve structural efficiency of building components. These structures are designed to withstand a variety of loads, of which thermal loading due to fire could cause severe damage and lead to structural failure. While fire exposure is taken into account in common design codes, it is largely neglected in existing topology optimization procedures. Therefore, in this study the effect of thermal loading due to fire exposure is implemented. A weakly coupled thermo-mechanical finite element analysis governed by thermal expansion and transient heat conduction is considered. Results show that the thermo-mechanical load ratio and the duration of the thermal action have significant influence on the optimized topologies.".
- 01GMVB5F20RMEQAXBK9AZR128Q abstract "Highly dynamical systems are an integral part of tomorrow's society, ranging from robots, over floating wind turbines, to drones. The conventional approach to the development of these systems relies on the V-diagram, in which we move from concept to product over an iterative loop of design, implementation and testing. Up until recently, a sequential approach was pursued during the design phase, where the design is first optimized statically, after which its functionality is enhanced by optimizing its control trajectory. This impedes finding systems with a concurrently optimal design and trajectory. To address this, co-design methods have appeared that do both simultaneously. Up until now, co-design has only been applied to low-fidelity models, which are cheap to evaluate but typically lack the ability to correctly represent reality, leading to many testing cycles. In this work we push model-based system design further, pursuing a `first time right' paradigm, through the inclusion of high-fidelity models (such as, for example, computational fluid dynamics). To account for the computational cost of these models, we introduce Bayesian optimization, leading to Bayesian co-design. To validate the new methodology, the model-based design of a drone was performed with the objective of performing inspection services for wind turbine parks. The technological progress that was realized does not restrict itself to drones, but is applicable to a broader scope of highly dynamical systems. As such, it contributes to breaking through the current barriers of several fields to accelerate the transition towards a better future.".
- 01GMVB5F24SH89967RWJHMDHDG abstract "Simultaneous Localization And Mapping (SLAM) is the task of constructing a map of an unknown environment while simultaneously maintaining an estimate of the agent's localization within this map. SLAM exists in many variants and is used in many different contexts. Dense 3D SLAM is one of the most important variants, which aims to create dense 3D maps that can be used for applications in fields such as civil engineering, security, virtual reality or autonomous robotics. The current standard method for dense 3D SLAM uses scanning Light Detection And Ranging (LiDAR) devices and Inertial Measurement Units (IMU), typically mounted on wheeled rovers for increased stability. As a whole, this setup allows a great mapping accuracy but is heavy, hard to navigate in non-flat environments, power-hungry and expensive. In this work, we aim to perform dense 3D SLAM with lightweight, portable sensors. More specifically, we use a specific type of camera that perceives 3D information with the same principle as LiDAR devices but in a miniaturized, less power-hungry manner. This allows the camera to be carried by hand or worn (on a helmet, for example) but comes at the cost of a decreased sensing accuracy. Moreover, carrying the camera by hand or wearing it on a helmet instead of setting it up on a wheeled rover makes it much less stable, making it even harder to estimate in the localization step of the problem. Therefore, most of our efforts go into the adaptation of existing LiDAR-inertial SLAM methods for noisier measurement with a more complex motion model. To do so, we develop a probabilistic SLAM framework based on the Error-State Iterative Kalman Filter (ESIKF) and implement various techniques to make the execution faster and less error-prone. With our experiments, we show that our method achieves competitive accuracy in different environments.".
- 01GMVB5KYN53Q6710XYEQNSR50 abstract "This research is situated in the design of integrated circuits (ICs). ICs are virtually everywhere. They are physical implementations of digital circuits that perform some logical functionality. The steps involved in IC design are twofold. First, in the Logic Synthesis step, an abstract representation of the logic functionality is designed in the form of a graph called a netlist. Second, this netlist needs to be further developed into a physically realizable model in a step called Physical Design (Placement and Routing). ICs consist of two main components: logic cells and interconnects. Logic cells are responsible for the data manipulation and interconnects transport the data between the logic cells. Both logic cells and interconnects contribute some delay, and these delays add up to determine the speed of the IC. Back in 1980, most of the delay was accounted for by the logic cells and interconnect delay was negligible. A consequence of this early imbalance is that the netlist is optimized to have as few logic cells as possible. However, as IC technology progressed, the delay in logic cells decreased significantly and the delay in interconnects became more important. Today, the speed of an IC is considerably impacted by lengthy interconnects. Despite this shift in the dominant source of delay, the general objective of minimizing the number of logic cells has not changed. In this poster, we argue that the netlist should be optimized for different, yet unknown, objectives. This is done by giving a simple example that shows a discrepancy between minimizing the number of logic cells and optimizing speed.".
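The poster summarized above rests on a simple observation: once interconnect delay dominates, the netlist with the fewest logic cells is not necessarily the fastest. The toy calculation below makes that point with invented delay figures; the cell and wire delays are purely illustrative and are not taken from the poster.

```python
# Toy critical-path comparison: a netlist with fewer logic cells can be slower
# than one with more cells if its interconnect is longer. All numbers invented.

CELL_DELAY_PS = 20           # assumed delay per logic cell on the critical path
WIRE_DELAY_PS_PER_UM = 0.5   # assumed delay per micrometre of interconnect

def path_delay(n_cells, wire_length_um):
    return n_cells * CELL_DELAY_PS + wire_length_um * WIRE_DELAY_PS_PER_UM

# Netlist A: minimized cell count, but the logic ends up spread out -> long wires.
delay_a = path_delay(n_cells=4, wire_length_um=600)
# Netlist B: more cells, but a placement-friendly structure -> short wires.
delay_b = path_delay(n_cells=6, wire_length_um=150)

print(f"netlist A (fewer cells): {delay_a:.0f} ps")   # 380 ps
print(f"netlist B (more cells):  {delay_b:.0f} ps")   # 195 ps
```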
- 01GMVB5KYTGF10WXG4PWXQV6FS abstract "Belgian Offshore Wind Farms are located in an area where the tidal current speed is significant. Therefore, it is important to understand the interaction between waves, current, and the structure in order to assess the wave field behaviour around the monopile. To properly assess the wave field around the monopile, a preliminary study is conducted by investigating the wave-current interaction. Generation and absorption of wave-current conditions are performed with a modified OpenFOAM Waves2Foam library in which the current velocity is superimposed on the wave velocity. Wave-current interaction is observed in the simulation region. In the opposing-current cases, the wavelength is found to shorten to a value similar to the theoretical estimate. However, the increase in wave height is found to be lower than the theoretical estimate. Thus, the wave height modification under velocity superposition requires further investigation.".
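The theoretical estimate of the wavelength change on a current follows from the Doppler-shifted linear dispersion relation, (ω − kU)² = gk·tanh(kh). The sketch below solves it numerically for an assumed wave period, water depth and opposing current speed; the numbers are illustrative and not taken from the study.

```python
# Wavelength on an opposing current from the Doppler-shifted linear dispersion
# relation (omega - k*U)**2 = g*k*tanh(k*h). Illustrative values only.
import numpy as np
from scipy.optimize import brentq

g = 9.81            # gravitational acceleration [m/s^2]
h = 20.0            # assumed water depth [m]
T = 8.0             # assumed wave period [s]
omega = 2 * np.pi / T

def residual(k, U):
    return (omega - k * U) ** 2 - g * k * np.tanh(k * h)

def wavelength(U):
    k = brentq(residual, 1e-4, 1.0, args=(U,))    # root on the long-wave branch
    return 2 * np.pi / k

print(f"no current:               L = {wavelength(0.0):.1f} m")
print(f"opposing current 1.5 m/s: L = {wavelength(-1.5):.1f} m")  # shorter wave
```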
- 01GMVB5KYYR1NPC1YFNH05KZB8 abstract "Braiding is a classic, but complex, structure in the textile industry. The multi-layer layout of braids makes the inner yarns of a braided rope hard to detect, even though they often bear the main stress. Especially when ropes are used to manoeuvre and regulate sails, they undergo a variety of dynamic stresses. This study focused on the complex problem of (cyclic) bending over sheaves (CBOS), and a full dynamic Finite Element Method (FEM) simulation of a braided rope bending over a sheave is carried out with the commercial software Abaqus/Explicit. FEM is advantageous here as it makes it possible to capture the complex processes occurring inside the rope with high accuracy, considering the complex structure of a braided rope. A theoretical deduction is performed to determine the braided yarn paths. The external yarns are modelled with 3D elements rather than beam elements. Under certain assumptions and specific boundary conditions, the relative movement between braided yarns (including at the interlacing points) is better assessed and an estimate of the relative yarn displacement is obtained. Parametric studies are conducted to better understand the factors that affect the yarn sliding. The ratio between the diameter of the sheave and that of the rope (D/d), and the braiding angle, are found to be crucial for the relative movement, while the number of yarns and their diameters have little influence. The successful simulation demonstrates the feasibility of modelling and analysing the complex interactions occurring in braided structures using FEM. The calculated displacement estimates can contribute to further investigations such as the friction and heat generation problem in braided ropes. Such a method can provide an alternative for predicting the lifespan of braided products that cannot be easily inspected.".
- 01GMWEQCPPEC9TK3BEFD8CGSHP abstract "Obstructive sleep apnea is linked to severe health consequences such as hypertension, daytime sleepiness, and cardiovascular disease. Nearly a billion people are estimated to have obstructive sleep apnea with a substantial economic burden. However, the current diagnostic parameter of obstructive sleep apnea, the apnea-hypopnea index, correlates poorly with related comorbidities and symptoms. Obstructive sleep apnea severity is measured by counting respiratory events, while other physiologically relevant consequences are ignored. Furthermore, as the clinical methods for analysing polysomnographic signals are outdated, laborious, and expensive, most patients with obstructive sleep apnea remain undiagnosed. Therefore, more personalised diagnostic approaches are urgently needed. The Sleep Revolution, funded by the European Union's Horizon 2020 Research and Innovation Programme, aims to tackle these shortcomings by developing machine learning tools to better estimate obstructive sleep apnea severity and phenotypes. This allows for improved personalised treatment options, including increased patient participation. Also, implementing these tools will alleviate the costs and increase the availability of sleep studies by decreasing manual scoring labour. Finally, the project aims to design a digital platform that functions as a bridge between researchers, patients, and clinicians, with an electronic sleep diary, objective cognitive tests, and questionnaires in a mobile application. These ambitious goals will be achieved through extensive collaboration between 39 centres, including expertise from sleep medicine, computer science, and industry and by utilising tens of thousands of retrospectively and prospectively collected sleep recordings. With the commitment of the European Sleep Research Society and Assembly of National Sleep Societies, the Sleep Revolution has the unique possibility to create new standardised guidelines for sleep medicine.".
- 01GMWEQCR0C82HYRWZ5EGZQ3NQ abstract "The European Somnologist certification programme was developed by the European Sleep Research Society to improve patient care in sleep medicine by providing an independent evaluation of theoretical and practical knowledge. The examination of eligible experts plays a key role in this procedure. A process was started more than 15 years ago to create the European sleep medicine curriculum, eligibility criteria for certification, and sleep centre accreditation criteria. The process was characterised by interdisciplinary collaboration, consensus, and achieving new solutions. During the past 10 years, experience has been gained by the examination and certification of more than 1000 sleep medicine experts from more than 50 countries. The process has continuously been improved. However, as the programme was designed and administered mainly by medical experts in the field, systematic influence from teaching and pedagogic experts was partially underrepresented. The current critical appraisal pinpoints several missing links in the process, mainly a missing constructive alignment between learning objectives, learning and teaching activities, and the final assessment. A series of suggestions has been made to further improve the ESRS certification programme.".
- 01GMWEQHMH8VGEHKHCKN2C7FAM abstract "Obstructive sleep apnea (OSA) is characterised by recurring episodes of upper airway obstruction during sleep, and the fundamental abnormality reflects the inability of the upper airway dilating muscles to withstand the negative forces generated within the upper airway during inspiration. Factors that result in narrowing of the oropharynx, such as abnormal craniofacial anatomy, soft tissue accumulation in the neck, and rostral fluid shift in the recumbent position, increase the collapsing forces within the airway. The counteracting forces of upper airway dilating muscles, especially the genioglossus, are negatively influenced by sleep onset, inadequacy of the genioglossus responsiveness, ventilatory instability, especially post arousal, and loop gain. OSA is frequently associated with comorbidities that include metabolic, cardiovascular, renal, pulmonary, and neuropsychiatric conditions, and there is growing evidence of bidirectional relationships between OSA and comorbidity, especially for heart failure, metabolic syndrome, and stroke. A detailed understanding of the complex pathophysiology of OSA encourages the development of therapies targeted at pathophysiological endotypes and facilitates a move towards precision medicine as a potential alternative to continuous positive airway pressure therapy in selected patients.".
- 01GMWEQHMPJZA3EKFK5T28DHYB abstract "Fifty years ago, in 1972, the founding conference of the European Sleep Research Society (ESRS) was organised in Basel. Since then the Society has had 13 presidents and a multitude of board members and has organised, among other things, another 24 congresses. At this 50th anniversary, as the 26th ESRS congress is approaching, we have summarised the history of the ESRS. In this review, we provide a background to show why the foundation of a European society was a logical step, and show how, in the course of the past 50 years, the Society changed and grew. We give special attention to some developments that occurred over the years and discuss where the ESRS stands now, and how we foresee its future.".
- 01GMWEVY5MMZ1ABJ68N625E1MW abstract "Background Cervical cancer screening is a cost-effective method responsible for reducing cervical cancer-related mortality by 70% in countries that have achieved high coverage through nationwide screening strategies. However, there are disparities in access to screening. In Ecuador, although cervical cancer is the second most common cancer in women, only 58.4% of women of reproductive age have ever been screened for cervical cancer. Methodology A qualitative study was performed to understand the current barriers to screening and to identify strategies that could increase uptake in Azuay province, Ecuador. Seven focus group discussions (FGDs) were conducted with under-screened women and health professionals (HPs). The FGDs were recorded and transcribed. Content analysis was done using the socio-ecological framework to categorize and analyse the data. Results Overall, 28 women and 27 HPs participated in the study. The two groups perceived different barriers to cervical cancer screening. The HPs considered barriers to be mainly at the policy level (lack of a structured screening plan; lack of health promotion) and the individual level (lack of risk perception; personal beliefs). The women identified barriers mainly at organizational level, such as long waiting times, lack of access to health centres, and inadequate patient-physician communication. Both groups mentioned facilitators at policy level, such as national campaigns promoting cervical cancer screening, and at community and individual level, including health literacy and women's empowerment. Conclusions The women considered access to health services the main barrier to screening, while the HPs identified a lack of investment in screening programmes and cultural patterns at the community level as major obstacles. To take an integrated approach to cervical cancer prevention, the perspectives of both groups should be taken into account. Additionally, new strategies and technologies, such as self-administered human papillomavirus (HPV) testing and community participation, should be implemented to increase access to cervical cancer screening.".
- 01GMWF20KZJGSDCH5GR2HT9AV8 abstract "Background: Poor housing conditions have been associated with increased mortality. Our objective is to investigate the association between housing inequality and increased mortality in Belgium and to estimate the number of deaths that could be prevented if the population of the whole country faced the mortality rates experienced in areas that are least deprived in terms of housing. Methods: We used individual-level mortality data extracted from the National Register in Belgium, relating to deaths that occurred between Jan. 1, 1991, and Dec. 31, 2020. Spatial and time-specific housing deprivation indices (1991, 2001, and 2011) were created at the level of the smallest geographical unit in Belgium, with these units assigned to deciles from the most to the least deprived. We calculated mortality associated with housing inequality as the difference between observed and expected deaths, obtained by applying the mortality rates of the least deprived decile to the other deciles. We also used standard life table calculations to estimate the potential years of life lost due to housing inequality. Results: Up to 18.5% (95% CI 17.7-19.3) of all deaths between 1991 and 2020 may be associated with housing inequality, corresponding to 584,875 deaths. Over time, life expectancy at birth increased for the most and least deprived deciles by about 3.5 years. The gap in life expectancy between the two deciles remained high, on average 4.6 years. Life expectancy in Belgium would increase by approximately 3 years if all deciles had the mortality rates of the least deprived decile. Conclusions: Thousands of deaths in Belgium could be avoided if all Belgian neighborhoods had the mortality rates of the least deprived areas in terms of housing. Hotspots of housing inequalities need to be located and targeted with tailored public actions.".
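The attributable-mortality figure above rests on a simple counterfactual: apply the mortality rate of the least deprived decile to every other decile and compare with the observed deaths. The sketch below runs that calculation on three invented deciles; the populations, death counts and resulting percentage are purely illustrative and unrelated to the study's data.

```python
# Counterfactual excess-death calculation: deaths that would not have occurred
# if every decile had the mortality rate of the least deprived decile.
# All numbers below are invented for illustration.

deciles = {
    # decile: (population, observed_deaths)
    "least deprived": (1_000_000, 9_000),
    "middle":         (1_000_000, 11_500),
    "most deprived":  (1_000_000, 14_000),
}

reference_rate = deciles["least deprived"][1] / deciles["least deprived"][0]

excess = 0.0
for name, (population, observed) in deciles.items():
    expected = reference_rate * population        # counterfactual deaths
    excess += observed - expected

total_observed = sum(obs for _, obs in deciles.values())
print(f"excess deaths: {excess:.0f} "
      f"({100 * excess / total_observed:.1f}% of all deaths)")
```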
- 01GMWF3HFDWVMJYBTCWEEKK326 abstract "Background: Human papillomavirus (HPV) testing on self-samples represents a great opportunity to increase cervical cancer screening uptake among under-screened women. Methods: A systematic review and meta-analysis of randomised controlled trials (RCTs) was performed to update the evidence on the efficacy of strategies for offering self-sampling kits for HPV testing compared to conventional invitations and to compare different self-sampling invitation scenarios. Four experimental invitational scenarios were considered. Women in the control group were invited for screening according to existing practice: collection of a cervical specimen by a healthcare professional. Random-effects models were used to pool proportions, relative participation rates and absolute participation differences. Results: Thirty-three trials were included. In the intention-to-treat analysis, all self-sampling invitation scenarios were more effective in reaching under-screened women compared to controls. The pooled participation difference (PD) and 95% confidence interval (CI) for experimental vs. control were 13.2% (95% CI = 11.0-15.3%) for mail-to-all, 4.4% (95% CI = 1.2-7.6%) for opt-in, 39.1% (95% CI = 8.4-69.9%) for community mobilisation & outreach and 28.1% (23.5-32.7%) for offer at healthcare service. The PD for the comparison opt-in vs. mail-to-all, assessed in nine trials, was -8.2% (95% CI = -10.8 to -5.7%). Discussion: Overall, screening participation was higher among women invited for self-sampling compared to controls, regardless of the invitation strategy used. Opt-in strategies were less effective than send-to-all strategies.".
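The pooled participation differences quoted above come from random-effects meta-analysis. The snippet below sketches the widely used DerSimonian-Laird flavour of that model on a handful of made-up trial-level participation differences; the effect sizes and variances are invented, and the review may have used different software and estimators.

```python
# DerSimonian-Laird random-effects pooling of trial-level effect sizes
# (here: absolute participation differences). Toy data, illustrative only.
import numpy as np

y = np.array([0.15, 0.10, 0.18, 0.08])           # invented participation differences
v = np.array([0.0009, 0.0016, 0.0025, 0.0012])   # invented within-trial variances

# Fixed-effect weights and Cochran's Q
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)

# Between-trial variance (DerSimonian-Laird estimator)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights, pooled estimate and 95% confidence interval
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled PD = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```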
- 01GMWG4SVQ1BYA1RCXYQNT9E66 abstract "The Klebsiella jumbo myophage phi Kp24 displays an unusually complex arrangement of tail fibers interacting with a host cell. In this study, we combine cryo-electron microscopy, protein structure prediction, molecular simulations, and microbiological and machine learning approaches to explore the capsid, tail, and tail fibers of phi Kp24. We determine the structure of the capsid and tail at 4.1 angstrom and 3.0 angstrom resolution, respectively. We observe that the tail fibers are branched and are rearranged dramatically upon cell surface attachment. This complex configuration involves fourteen putative tail fibers with depolymerase activity that provide phi Kp24 with the ability to infect a broad panel of capsular polysaccharide (CPS) types of Klebsiella pneumoniae. Our study provides structural and functional insight into how phi Kp24 adapts to the variable surfaces of capsulated bacterial pathogens, which is useful for the development of phage therapy approaches against pan-drug resistant K. pneumoniae strains.".
- 01GMWG4SWH5YSPW5JT944D4YA5 abstract "Pseudomonas fuscovaginae is the most prominent bacterial sheath rot pathogen, causing sheath brown rot disease in rice. This disease occurs worldwide and it is characterized by typical necrotic lesions on the sheath, as well as a reduction in the number of emitted panicles and filled grains. P. fuscovaginae has been shown to produce syringotoxin and fuscopeptin cyclic lipopeptides (CLPs), which have been linked to pathogenicity. In this study, we investigated the role of P. fuscovaginae UPB0736 CLPs in plant pathogenicity, antifungal activity and swarming motility. To do so, we sequenced the strain to obtain a single-contig genome and we constructed deletion mutants in the biosynthetic gene clusters responsible for the synthesis of CLPs. We show that UPB0736 produces a third CLP of 13 amino acids, now named asplenin, and we link this CLP with the swarming activity of the strain. We could then show that syringotoxin is particularly active against Rhizoctonia solani in vitro. By testing the mutants in planta we investigated the role of both fuscopeptin and syringotoxin in causing sheath rot lesions. We proved that the presence of these two CLPs considerably affected the number of emitted panicles, although their number was still significantly affected in the mutants deficient in both fuscopeptin and syringotoxin. These results reveal the importance of CLPs in P. fuscovaginae pathogenicity, but also suggest that other pathogenicity factors may be involved.".
- 01GMWG4SWP4YY6B6PM9BY8GAFD abstract "This paper aims to determine the prevalence of the use of online neighborhood networks (ONNs) and explores which socio-demographic, socioeconomic status (SES), social integration, and media use determinants predict ONN membership and uses. Drawing on a stratified random sample (Ghent, Belgium), we found that over a third of the population are ONN members and that membership was mainly predicted by socio-demographic characteristics. Also, in contrast to prior research, our results show that ONNs are the local online territory of residents of lower SES, to whom these ONNs are a means of connecting with and capitalizing on neighborhood connections.".
- 01GMWG4SX6R93XQ3E45AD9AXB2 abstract "Air pollutants emitted by ocean-going vessels (OGVs) cause numerous environmental and human health problems. In 2016, the Belgian Coastguard aircraft was equipped with a sniffer sensor to monitor compliance with MARPOL Annex VI Regulation 14. However, the sensor was susceptible to NO and volatile organic compounds (VOCs), which increased the measurement uncertainty. These measurement errors were eliminated by modifying the sensor, including, among other things, the addition of a NOx sensor and a custom-designed hydrocarbon kicker. This resulted in a substantial improvement in the measurement quality and uncertainty of the derived Fuel Sulfur Content (FSC). As a direct result, the reporting thresholds for non-compliance improved drastically. The data analysis of sampled OGVs showed that compliance levels notably improved between 2019 and 2020 (from 95.9% to 97.3%), coinciding with the implementation of the Global Sulfur Cap. Findings in this study have also demonstrated that OGVs equipped with emission abatement technology (scrubbers) are more susceptible to non-compliance with Regulation 14 of MARPOL Annex VI. Given these results, this article provides an answer to the question of how to monitor the effective implementation of regulations on NO emissions from OGVs.".
- 01GMWG4YSRQPCKFXZ4R0RXZD1Q abstract "The redistribution of biological (transpiration) and non-biological (interception loss, soil evaporation) fluxes of terrestrial evaporation via atmospheric circulation and precipitation is an important Earth system process. In vegetated ecosystems, transpiration dominates terrestrial evaporation and is thought to be crucial for regional moisture recycling and ecosystem functioning. However, the spatial and temporal variability in the dependency of precipitation on transpiration remains understudied, particularly in sparsely sampled regions like Africa. Here, we investigate how biological and non-biological sources of evaporation in Africa contribute to rainfall over the major watersheds in the continent. Our study is based on simulated atmospheric moisture trajectories derived from the Lagrangian model FLEXPART, driven by 1° resolution reanalysis data over 1981-2016. Using daily satellite-based fractions of transpiration over terrestrial evaporation, we isolate the contribution of vegetation to monthly rainfall. Furthermore, we highlight two watersheds (Congo and Senegal) for which we explore intra- and interannual variability of different precipitation sources, and where we find contrasting patterns of vegetation-sourced precipitation within and between years. Overall, our results show that almost 50% of the annual rainfall in Africa originates from transpiration, although the variability between watersheds is large (5%-68%). We conclude that, considering the current and projected patterns of land use change in Africa, a better understanding of the implications for continental-scale water availability is needed.".
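The isolation of the vegetation contribution described above boils down to weighting each moisture-source contribution by the local transpiration fraction of terrestrial evaporation. The sketch below illustrates just that weighting step with invented numbers; it stands in neither for the FLEXPART trajectory analysis nor for the satellite-based fractions used in the study.

```python
# Attribution of precipitation to transpiration: weight each moisture-source
# contribution by the local transpiration fraction of terrestrial evaporation.
# Toy numbers, illustrative of the weighting step only.
import numpy as np

# Moisture contributed to rainfall over a target watershed by three source
# cells (e.g. from Lagrangian back-trajectories), in mm/month. Invented.
source_contribution = np.array([12.0, 30.0, 8.0])

# Fraction of terrestrial evaporation that is transpiration at each source
# cell (e.g. from satellite-based estimates). Invented.
transpiration_fraction = np.array([0.7, 0.55, 0.3])

total_precip = source_contribution.sum()
from_transpiration = (source_contribution * transpiration_fraction).sum()
print(f"share of rainfall sourced from transpiration: "
      f"{100 * from_transpiration / total_precip:.1f}%")
```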
- 01GMWG4YSXXJEBV26SBHA1HKSK abstract "SCAN (Scientific Content Analysis) is a verbal credibility assessment (VCA) tool that claims to detect deception in written statements. Although the validity of SCAN is contested in literature, various (law enforcement) agencies across the globe are trained in using SCAN. To date it remains unknown how SCAN is perceived, and to what extent it is used in practice. Based on a scoping review and qualitative survey, we identified practitioners' and scholars' perceptions on the use(fulness) of SCAN. Data were collected from 48 participants (35 practitioners and 13 scholars). Key findings illuminate (1) that practitioners apply an incomplete, personalized version of SCAN, (2) that SCAN practitioners are reluctant to abandon SCAN, and (3) that SCAN is considered incompatible with (Belgian) legislation on police questioning. Based on practitioners' expressed needs and concerns, we present several alternatives for SCAN, as well as recommendations on how a shift to other techniques can be facilitated.".