Abstract Egypt's educational system, comprising public and private institutions across all levels, faces multiple issues affecting its quality, equity, and relevance. A recent study by Egypt's Ministry of Health identified that 29.8% of high school students experience mental health problems like anxiety, speech defects, depression, stress and tension, emphasizing the significance of addressing such concerns. Therefore, we aimed to conduct a random sample study of 130 students from the STEM and conventional educational systems in Egypt to compare their perceived stress levels. An online Arabic questionnaire was shared with the targeted population over a period of two weeks. The questionnaire included inquiries about academic and demographic information as well as the Perceived Stress Scale (PSS-10). Respondents were asked about their personal information, academic performance, extracurricular activities, average studying hours, and perceived stress levels. The PSS-10 assessed stress levels based on questions covering coping, control, unpredictability, and overload. Statistical analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 25. Major findings revealed that STEM students suffer from higher stress levels than conventional students, mainly due to fewer hours of sleep. Additionally, significant differences were found in stress levels between male and female students across the sample. These findings underscore the need to address academic pressures and establish appropriate mental health screening in STEM schools to mitigate negative emotional effects. They are crucial in developing effective strategies for minimizing student stress levels, and educational institutions should utilize this data to assess their curriculum and make any necessary changes.
Keywords: Perceived stress scale, STEM education, Conventional education, Sleep hours, Extracurriculars
I. Introduction
As we navigate the academic jungle, it is no secret that we may encounter academic stress, which is part of most Egyptian students' lives. A new study published by Egypt's Ministry of Health reveals that 29.8% of high-school students suffer from anxiety, tension, speech defects, or depressive disorders. Deadlines, parental and societal pressure, the desire for perfection, and poor time management are among the most common causes of stress among high school students. Egypt's educational system is one of the biggest in Africa and the Middle East, comprising both public and private institutions at the primary, preparatory, secondary, and tertiary levels. Despite governmental efforts to improve education, the system continues to be riddled with problems that negatively affect students' overall mental health. Egypt's high school system offers diverse programs geared toward different career fields, including general, vocational, and technical education. Despite significant efforts in the field of educational psychology, no previous studies, to the best of our knowledge, have addressed the difference in stress levels between students of STEM schools and conventional schools in Egypt, taking into account the learning system, social interactions, extracurriculars, personal habits, and extra projects and tasks. Therefore, this study aims to compare the prevalence of perceived stress across two major educational systems in Egypt, STEM high schools and conventional high schools, as they differ in academic advancement levels, curriculum, teaching methods, extracurriculars, and the skills and knowledge they aim to impart, and to determine and analyze possible demographic and academic factors related to stress levels in both populations. The methodology involved conducting an online Arabic survey asking for the necessary demographic and academic information as well as the ten questions of the Perceived Stress Scale. Data were collected from 130 respondents, including males and females, STEM and non-STEM students, and various academic levels and educational grades. Data were analyzed to examine the difference in stress levels between the two populations and the significance of the association between stress levels and other variables. Ensuring that the study is designed ethically and fairly to benefit students and educational institutions in Egypt, we aimed for findings that can be practically implemented by educational administrations.
Hypotheses
Null Hypothesis (1): There will be no statistically significant difference between students of the STEM educational system and students of the conventional educational system regarding perceived stress levels.
Null Hypothesis (2): There will be no statistically significant difference between males and females regarding perceived stress levels.
Null Hypothesis (3): There will be no statistically significant difference between educational grades (Grades 10, 11, and 12) in terms of perceived stress levels.
II. Literature Review
Intensive research and experiments have been conducted to examine the relation between academic performance and stress (including psychological, physical, social, and academic stress). Previous research has indicated a significant negative impact of stress on academic performance, roughly equal for males and females, and has emphasized that teachers play a vital role in reducing stress among their students.
III. Methodology
1. Participants
This research employed a mixed online survey that ran for two consecutive weeks starting from the 11th of August, 2023. It was used to obtain data from a random sample of students from both the conventional educational system and the STEM educational system in Egypt. 130 anonymous responses were collected from various high schools and localities in Egypt.
2. Measurements
2.1. Demographics: Respondents were asked to fill in their personal information, including birth date, gender, and average sleeping hours.
2.2. Academic Information: Respondents were asked about their academic performance, extracurricular activities during school months, average studying hours, educational system, educational level, and academic grades.
2.3. Perceived Stress Scale (PSS-10): The Perceived Stress Scale (PSS) is a structured questionnaire used to assess the level of stress that a population, or a sample of it, faces during a specific period. The test includes ten questions that cover topics such as coping, control, unpredictability, and overload. Questions such as "In the last month, how often have you felt that you were unable to control the important things in your life?" were asked, and the respondent was required to give an answer on a scale from 0 (Never) to 4 (Very Often). The final score is the sum of the scores of the individual questions, with the scores of questions 4, 5, 7, and 8 reversed: in these four questions, a score of 4 counts as 0, 3 as 1, 2 as 2, 1 as 3, and 0 as 4. A final score of 0-13 indicates low stress, 14-26 indicates moderate stress, and 27-40 indicates that the respondent suffers from severe stress (a minimal scoring sketch is given after Table 2 below).
3. Procedures
For the data to be collected effectively, a structured, validated online Arabic questionnaire of multiple-choice questions was created using Google Forms. It was accessible to the targeted populations and sent to them via social media and mail platforms, for instance, WhatsApp and Microsoft Outlook. Daily reminders to fill out the form were sent as well. Participants' responses, which included concise, clear answers for the demographic, academic information, and PSS-10 sections, were recorded. Two weeks later, data collection closed and analysis began.
4. Data collection and instrumentation
The collected data represented the stress score of each individual, as well as other information necessary for testing the hypotheses. All analytic processes were performed using the Statistical Package for the Social Sciences (SPSS) version 25. Descriptive statistics were used to determine similarities and differences between the two populations. Table [1] shows the statistical methods used to compare the two populations (STEM & non-STEM). Table [2] shows the statistical methods used to determine the relation between different demographic/academic variables and stress scores.
Variable | Statistical Test | Variable | Statistical Test |
---|---|---|---|
Age | Independent sample t-test | Stress Score | Independent sample t-test |
Educational Level | Chi-Square test | Sleeping Hours | Chi-Square test |
Gender | Chi-Square test | Studying hours | Chi-Square test |
Number of extracurriculars | Mann-Whitney test | Categories of extracurriculars | Mann-Whitney test |
Academic performance | Chi-Square test |
Relations | Dependent Variable | Independent Variable | Statistical Test |
---|---|---|---|
Gender * Stress Score | Stress Score | Gender (Males & Females) | Independent sample t-test |
Educational level * Stress Score | Stress Score | Educational Level (Grades 10 & 11 & 12) | One-Way ANOVA |
Sleeping hours * Stress Score | Stress Score | Sleeping hours (< 4) & (4-8) & (> 8) | One-Way ANOVA with Tamhane post hoc test |
Studying hours * Stress Score | Stress Score | Studying hours (< 6) & (6-12) & (> 12) | One-Way ANOVA |
Categories of extracurriculars * Stress Score | Stress Score | Number of different categories of extracurriculars (1-7) | Spearman's correlation coefficient test |
Number of extracurriculars * Stress Score | Stress Score | Number of extracurriculars | Spearman's correlation coefficient test |
Educational system * Stress Score | Stress Score | Educational system (STEM & non-STEM) | Independent sample t-test |
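As described in Section 2.3, each respondent's PSS-10 score is the sum of the ten item scores after reversing items 4, 5, 7, and 8, with cutoffs at 13 and 26. A minimal scoring sketch in Python, using hypothetical answers rather than study data, is:

```python
# Minimal sketch of PSS-10 scoring as described in Section 2.3 (hypothetical responses).
def pss10_score(answers):
    """answers: list of 10 integers in 0..4 (0 = Never ... 4 = Very Often)."""
    reverse_items = {3, 4, 6, 7}  # questions 4, 5, 7, 8 (0-indexed) are reverse-scored
    score = sum(4 - a if i in reverse_items else a for i, a in enumerate(answers))
    if score <= 13:
        level = "low stress"
    elif score <= 26:
        level = "moderate stress"
    else:
        level = "severe stress"
    return score, level

# Example with hypothetical responses
print(pss10_score([3, 3, 4, 2, 1, 3, 2, 1, 3, 4]))  # (30, 'severe stress')
```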
IV. Results
1. Demographic Analysis
A total of 130 responses were collected from the online survey. Respondents included 83 females (64%) and 47 males (36%). STEM students represented 61% of respondents (79 students), while conventional (non-STEM) students represented 39% (51 students). The academic performance of the majority of respondents (92 students) ranged between 80-89%, 34 students ranged between 90-100%, and only 4 students were between 70-79%. Most of the respondents (92 students, 71%) were aged between 15 and 17 years, Fig. (1). 69% of respondents (90 students) were in grade 11, 20% (26 students) were in grade 12, and 11% (14 students) were in grade 10. Average sleeping hours for most of the students (116 students, 89%) ranged between 4-8 hours/day, while average studying hours for the majority of students (73 students, 56%) were less than 6 hours/day, Fig. (1). Table [3] below indicates that there was no statistically significant difference in age, gender, educational grade, or studying hours between the two populations. However, there were statistically significant differences in academic performance, sleeping hours, number of extracurricular activities, and the variety of their categories.
Variable | STEM | Non-STEM | P value* |
---|---|---|---|
Age (mean ± SD) | 16 ± 0.925 | 16 ± 0.947 | 0.979 |
Educational grade (10/11/12) | 11 / 54 / 14 | 3 / 36 / 12 | 0.300 |
Academic performance (90-100% / 80-89% / 70-79%) | 10 / 69 / 0 | 24 / 23 / 4 | < 0.001 |
Studying hours (< 6 / 6-12 / > 12 h) | 38 / 40 / 1 | 35 / 16 / 0 | 0.060 |
Sleeping hours (< 4 / 4-8 / > 8 h) | 3 / 75 / 1 | 2 / 41 / 8 | 0.007 |
Categories of extracurricular activities (students reporting 0-7 categories) | 1 / 8 / 5 / 14 / 18 / 13 / 10 / 10 | 6 / 18 / 11 / 7 / 4 / 2 / 1 / 2 | < 0.001 |
Number of activities (mean rank) | 76.65 | 46.59 | < 0.001 |
Question Number (Count, Percent) | 0 (Never) | 1 (Almost Never) | 2 (Sometimes) | 3 (Fairly Often) | 4 (Very Often) |
---|---|---|---|---|---|
Question 1 | 5 (3.8%) | 11 (8.5%) | 45 (34.6%) | 46 (35.4%) | 23 (17.7%) |
Question 2 | 5 (3.8%) | 14 (10.8%) | 29 (22.3%) | 53 (40.8%) | 29 (22.3%) |
Question 3 | 2 (1.5%) | 6 (4.6%) | 20 (15.4%) | 45 (34.6%) | 57 (43.8%) |
Question 4 | 9 (6.9%) | 24 (18.5%) | 49 (37.7%) | 28 (21.5%) | 20 (15.4%) |
Question 5 | 14 (10.8%) | 51 (39.2%) | 45 (34.6%) | 17 (13.1%) | 3 (2.3%) |
Question 6 | 4 (3.1%) | 14 (10.8%) | 32 (24.6%) | 57 (43.8%) | 23 (17.7%) |
Question 7 | 17 (13.1%) | 32 (24.6%) | 55 (42.3%) | 19 (14.6%) | 7 (5.4%) |
Question 8 | 19 (14.6%) | 51 (39.2%) | 37 (28.5%) | 16 (12.3%) | 7 (5.4%) |
Question 9 | 4 (3.1%) | 12 (9.2%) | 19 (14.6%) | 60 (46.2%) | 35 (26.9%) |
Question 10 | 5 (3.8%) | 11 (8.5%) | 30 (23.1%) | 36 (27.7%) | 48 (36.9%) |
Variable | Stress Score Mean | Standard Deviation | P value | Significance of Association with Stress Scores |
---|---|---|---|---|
Educational System (STEM / non-STEM) | 26.4935 / 24.3000 | 5.69770 / 6.51920 | .047 | Statistically significant |
Gender | 27.0370 / 23.1522 | 5.69527 / 6.06984 | < .001 | Statistically significant |
Educational Grade (Grade 10 / 11 / 12) | 23.6429 / 25.5455 / 27.0400 | 5.59680 / 6.34254 / 5.33448 | .244 | Statistically insignificant |
Studying Hours | 5.3857 / 25.9286 | 6.42976 / 5.77410 | .884 | Statistically insignificant |
Sleeping Hours (< 4 / 4-8 / > 8 h) | 30.0000 / 25.7965 / 21.1111 | 3.67423 / 6.05209 / 5.66667 | .021 | Statistically significant |
Variable | Mean Square | df | Sig. | Significance of Association with Stress Scores |
---|---|---|---|---|
Grade (among STEM population) | 26.697 | 2 | .445 | Statistically insignificant |
Grade (among non-STEM population) | 78.644 | 2 | .158 | Statistically insignificant |
Test | Sleep Hours (I) | Sleep Hours (J) | Mean Difference (I-J) | Sig. |
---|---|---|---|---|
Tamhane | < 4 hours | 4-8 hours | 4.20354 | .170 |
 | < 4 hours | > 8 hours | 8.88889* | .013 |
 | 4-8 hours | < 4 hours | -4.20354 | .170 |
 | 4-8 hours | > 8 hours | 4.68535 | .116 |
 | > 8 hours | < 4 hours | -8.88889* | .013 |
 | > 8 hours | 4-8 hours | -4.68535 | .116 |
Relation | Spearman's rho | P value | Significance of Association with Stress Scores |
---|---|---|---|
Categories of extracurricular activities * Stress Score | .038 | .674 | Statistically insignificant |
Number of extracurriculars * Stress scores | .036 | .686 | Statistically insignificant |
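The tests reported above were run in SPSS version 25. For readers without SPSS, a minimal sketch of analogous tests using scipy.stats on hypothetical score vectors (not the study data) might look like the following:

```python
from scipy import stats

# Hypothetical PSS-10 scores for two groups, used only to illustrate the tests above.
stem_scores = [28, 25, 31, 22, 27, 30, 24, 26]
conv_scores = [21, 24, 19, 26, 23, 20, 25, 22]

t, p = stats.ttest_ind(stem_scores, conv_scores)        # independent-samples t-test
print(f"t = {t:.3f}, p = {p:.3f}")

groups = ([30, 29, 31], [26, 25, 27, 24], [21, 22, 20])  # e.g. three sleep-hour categories
f, p = stats.f_oneway(*groups)                           # one-way ANOVA
print(f"F = {f:.3f}, p = {p:.3f}")

n_extracurriculars = [0, 1, 2, 3, 4, 5, 6, 7]
scores = [24, 26, 22, 27, 25, 28, 23, 26]
rho, p = stats.spearmanr(n_extracurriculars, scores)     # Spearman's correlation
print(f"rho = {rho:.3f}, p = {p:.3f}")
```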
V. Discussion
This study aims to compare stress levels between STEM and conventional students in Egypt and to identify the significance of the association of certain demographic and academic factors with stress levels in both populations. Null Hypothesis (1): There will be no statistically significant difference between STEM students and conventional students in terms of perceived stress levels. According to the findings of this study, represented in Table [5], this hypothesis was rejected, as there was a statistically significant difference between the mean stress levels of the two populations (p = .047). This is closely related to the nature of STEM schools in Egypt as boarding schools for high-achieving students, associated with higher levels of competition and more extracurriculars than conventional schools (Table [3]). Previous studies have addressed the effect of being educated in a boarding school on stress levels and social interactions. Children sent to boarding schools tend to suffer from sudden and often irrevocable traumas, as well as bullying and sexual abuse.
VI. Conclusion
This study was the first of its kind aiming to assess the variation in the prevalence of perceived stress between two types of educational systems in Egypt, the STEM and conventional secondary systems, taking into consideration social, academic, and extracurricular factors and utilizing a range of statistical tests for data analysis. The sample (130 high school students) was chosen randomly, and respondents were asked to fill in an online Arabic survey to collect specific demographic information and determine stress scores through the Perceived Stress Scale (PSS-10). The analytic tests performed on the results revealed that STEM students are subjected to higher levels of perceived stress than conventional students. Lack of a strong parental relationship, excess workload, and intense competition contribute to this significant difference in stress levels. However, the most important factor was the reduced quality and quantity of sleep, which may be a result of over-participation in extracurriculars. Additionally, we found that there was a significant difference between the two genders in terms of stress levels. Upon analyzing the influence of these factors on the overall mental health of students, addressing them and establishing appropriate mental health screening in STEM schools is an essential step that may aid in developing the creativity, innovation, and enthusiasm of students and in reducing negative emotional and mental problems that may affect academic or social performance.
VII. References
Abstract Sickle cell disease (SCD) is a serious inherited hemoglobinopathy that was responsible for the deaths of 376,000 patients in 2021. The number of infants born with this genetic disorder rose by 13.7% between 2000 and 2021. Its fatal complications and painful vaso-occlusive crisis (VOC) episodes are associated with a reduced quality of life, hospitalization, and healthcare burden. Most SCD therapies focus on managing symptoms rather than curing the disease itself. This study focused mainly on the most life-threatening complications, including cardiovascular complications and acute splenic sequestration, and on the medications managing or curing them, including hydroxyurea (HU) and gene therapy. HU was found to be effective in reducing cell sickling and was associated with improvements in organ function, reflected in fewer acute chest syndromes and fewer crises requiring blood transfusion. Its drawbacks include reduced sperm count and restricted erythroid cell growth. Because it aims to cure SCD itself, lentiviral gene therapy has an advantage over HU. However, more research should be done on developing gene therapy to determine the cause of, and solutions for, the malignancies reported in some cases.
I. Introduction
Sickle cell anemia (SCD) has been present in Africa for about 5,000 years. A story of research, of discovering complications, and of finding medications was written by great scientists, such as the chemist Dr. Linus Carl Pauling, Dr. Ingram, and others, marking an evolution from calling patients "ogbanjes" in Africa to the use of gene therapy techniques to treat the disease.
II. Hemoglobins
Human hemoglobin (Hb) is central to how the human body functions. There are many types of hemoglobin in humans; some of them are healthy and normal, and others are abnormal and can cause many disorders.
III. Sickle Cell Anemia
i. Sickle cell anemia history
Sickle cell anemia has been present in Africa for about 5,000 years, but this history was not well recorded.
ii. Pathophysiology of SCD
The pathophysiology of sickle cell anemia starts with the emergence of sickle cell hemoglobin (HbS). This happens because of a mutation in the hemoglobin (Hb) beta chain: at the 6th position, it has the amino acid valine, whereas the normal chain has glutamate. This glutamate-to-valine substitution results from an original mutation in which the 6th codon of the beta-globin gene has thymine instead of adenine.
IV. Complications
i. Insight
The slender biconcave shape of RBCs gives them the ability to gather into rouleaux, preventing individual RBCs from clumping together in micro blood vessels and thereby avoiding vaso-occlusion. Furthermore, this slender shape gives them noticeable strength and elasticity, evident when they squeeze through distorted, narrow blood capillaries.
ii. Cardiovascular complications
iii. Acute Splenic Sequestration Crisis (ASSC)
Acute splenic sequestration (ASS) is one of the most common complications of sickle cell anemia. Researchers define it as a drop in hemoglobin level of no less than 20%, accompanied by an enlargement of the spleen of at least 2 cm compared with the patient's normal condition.
V. Medications
i. Hydroxyurea
Hydroxyurea (HU) is the first U.S. Food and Drug Administration (FDA)-approved medication for sickle cell disease. By inhibiting the ribonucleotide reductase enzyme, this cytostatic agent drains the deoxyribonucleotide reserves within the cells (used in DNA synthesis and repair). This NO-releasing drug also demonstrates a continuous inhibition of erythroid cell growth (reaching 20-40% within 6 days) and of erythroleukemic K562 cell growth (reaching 65% within 2 days).
ii. Lentiviral gene therapy
Contrary to other, symptom-focused SCD therapies, gene therapy works to cure the disease itself. Unprecedented advances in genomic sequencing paved the way for the discovery of new molecular tools for genome modification, which made gene therapy a promising treatment for SCD. Though bone marrow transplantation and allogeneic blood are known to cure SCD, donor availability restrictions (recent studies reported that less than 25% of patients found a suitable intrafamilial donor) and graft-versus-host disease represent significant drawbacks compared to gene therapy.
VIII. Conclusion
HbS is the main cause of SCD, and reducing HbS and/or its effects will reduce the disease's severity. Blood transfusions can lead to many side effects, which is why researchers do not recommend them for treating SCD. Hydroxyurea is generally used to manage complications and cannot prevent all of the disease's complications; in addition, hydroxyurea can aggravate other complications while treating a specific one, so the smallest effective dose of HU is recommended rather than overuse. On the other hand, gene therapy techniques can treat the disease itself and can help humanity end the crisis of sickle cell anemia. In addition, it is not a chronic treatment, and patients will not need hospital visits after enough hematopoietic stem cells have been extracted to be gene-modified. Another key conclusion is to use lentiviral, not retroviral, vectors to increase the effectiveness of the therapy. So, the research question was answered by the fact that lentiviral gene therapy has good potential to save SCD patients. Future research is needed to uncover the hidden opportunities in lentiviral gene therapy and to help poor patients, as it is expensive nowadays; hence, it may not be available in places like Africa, where the disease is widespread and healthcare levels are not high enough to support such a technique. Note that economic aspects were not a focus of this paper.
IX. References
Abstract The new and emerging field of quantum computing harnesses our understanding of the complex dynamics of quantum systems, promising to advance and revolutionize scientific fields with real-world applications. However, realizing this potential currently faces challenges: scaling error-corrected qubits, navigating large parameter spaces, and efficiently compiling quantum circuits given hardware constraints. This paper reviews techniques to address these obstacles by integrating automatically differentiable quantum circuits (ADQCs) with tensor networks (TN) to enable reverse-mode automatic differentiation for efficient optimization. We propose applying this ADQC-TN framework to the 2D Hubbard model, a foundational model of strongly correlated electron systems exhibiting rich phase diagrams, including unconventional superconductivity. This framework can elucidate detailed mechanisms, such as how dopant atoms influence superconducting electron pairing, by training the model's hopping, interaction, chemical potential, and other parameters on experimental measurements. Robust optimization represents a pivotal bridge between quantum computing and experimental condensed matter physics to advance quantum-based materials modeling and discovery, which has already seen success in the preparation of the ground states of quantum lattice models with low fidelity errors. Successfully trained Hubbard models could facilitate the analysis and understanding of the effects of parameter tuning and the potential effect of defects on superconductivity, aiding future modeling discoveries by bridging the gap between quantum computing and physical engineering.
I. Introduction
When John Dalton first proposed modern atomic theory, a whole quantum world was waiting to be unveiled: a world where the principles of everyday reality break down and what governs is beyond the naked eye. Based on this quantum mechanics, Feynman and other scientists envisioned using quantum computers to simulate quantum systems, under the premise that an initial quantum state can be unitarily evolved with a quantum computer that is polynomial in size and evolution time.
II. Quantum Physics
A. Schrödinger's Equation
In simulating quantum mechanics, we are first interested in the solution of the time-dependent Schrödinger equation,
$$ i\hbar \, \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle , $$
where $\hat{H} = -\frac{\hbar^2}{2m} \nabla^2 + V$ represents the Hamiltonian, the total energy operator acting on the wavefunction $\psi$. Equivalently (taking $\hbar = 1$), $\psi(t) = e^{-iHt}\,\psi(0)$ propagates the initial state $\psi(0)$ through time. The Schrödinger equation is a useful representation of quantum dynamics because it fundamentally describes how quantum systems evolve and what their eigenstates are. It also takes a time-independent form, $\hat{H}\psi = E\psi$, but we will focus mainly on the time-dependent form and the applications that follow from it.
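As a concrete illustration of this time evolution, a minimal numerical sketch, assuming a hypothetical two-level Hamiltonian and $\hbar = 1$, propagates an initial state with the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-level Hamiltonian (hbar = 1): H = sigma_x
H = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state |0>

t = np.pi / 2
U = expm(-1j * H * t)     # time-evolution operator e^{-iHt}
psi_t = U @ psi0          # |psi(t)> = e^{-iHt} |psi(0)>

print(np.round(psi_t, 3))  # for H = X and t = pi/2 this is -i|1>
```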
B. The Hamiltonian Approximation
The first idea is to approximate $\psi(t + \Delta t)$ as $(I - iH\Delta t)\,\psi(t)$, following classical conventions, but this is not accurate enough. We must instead use the operator $e^{-iH\Delta t}$, so that $\psi(t + \Delta t) = e^{-iH\Delta t}\,\psi(t)$ for a sufficiently small time step $\Delta t$. For local Hamiltonians such as the Ising and Hubbard models, we can simulate efficiently by decomposing the problem into smaller subsystems so that the complexity is $O(\mathrm{poly}(N))$, since applying $e^{-iHt}$ directly is expensive. For such Hamiltonians, we first write $H = \sum_{a=1}^{b} H_a$ in terms of sub-Hamiltonians. Then we can apply the Trotter-Suzuki decomposition to approximate the time-evolution operator,
$$ e^{-iHt} = e^{-i\sum_{a} H_a t} = \lim_{n \to \infty} \Biggl(\prod_{a=1}^{b} e^{-iH_a \frac{t}{n}}\Biggr)^{n} . $$
For finite $n$ this gives
$$ e^{-iHt} = \Biggl(\prod_{a=1}^{b} e^{-iH_a \frac{t}{n}}\Biggr)^{n} + O\!\Biggl(\frac{b^2 t^2}{n}\Biggr) . $$
The full Hamiltonian is decomposed into $b$ local terms $H_a$, with $e^{-iH_a t/n}$ representing the evolution of each term and an overall error of $O\bigl(\frac{b^2 t^2}{n}\bigr)$. For $n$ Trotter steps there is a tradeoff between accuracy and efficiency, because the error grows with the number of terms that do not commute, $H_{a_1} H_{a_2} \ne H_{a_2} H_{a_1}$. In this first-order approximation, the error grows with $b$ but can be suppressed by increasing the number of time steps $n$; that is, accuracy improves at the cost of more matrix multiplications.
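As a quick numerical sanity check of this first-order Trotter error, the following sketch, using two hypothetical non-commuting terms unrelated to any specific model, compares the exact and Trotterized propagators:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical non-commuting terms (hbar = 1): H = H1 + H2 with H1 = X, H2 = Z
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H1, H2 = X, Z

t = 1.0
exact = expm(-1j * (H1 + H2) * t)

for n in (1, 10, 100):
    step = expm(-1j * H1 * t / n) @ expm(-1j * H2 * t / n)
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact, 2)
    print(f"n = {n:4d}   spectral-norm error = {err:.2e}")  # error shrinks roughly as 1/n
```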
Let's look at the Hubbard model as a simulation example.
C. The Hubbard Model
The Hubbard model describes quantum tunneling (hopping), as shown in Fig. 2, between neighboring lattice sites, together with the on-site interaction between two fermions of opposite spin. In second-quantized form it can be written as
$$ H = -t \sum_{\langle i,j \rangle, \sigma} \bigl(a^{\dagger}_{i\sigma} a_{j\sigma} + a^{\dagger}_{j\sigma} a_{i\sigma}\bigr) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} - \mu \sum_{i,\sigma} n_{i\sigma} , $$
where $t$ is the hopping amplitude, $U$ the on-site interaction, and $\mu$ the chemical potential.
III. Quantum Computing
A. Jordan-Wigner Mapping
To utilize quantum computing, we must transform the Hamiltonian into a set of operations a quantum computer can execute. Namely, we can use the Jordan-Wigner transformation for second-quantized Hamiltonians to map fermionic occupations to qubit orientations, representing the creation and annihilation operators with Pauli-$X$ and -$Y$ so that $a^{\dagger}_{i}|0\rangle_i = |1\rangle_i$, $a_i|1\rangle_i = |0\rangle_i$, and $a^{\dagger}_{i}|1\rangle_i = a_i|0\rangle_i = 0$:
$$ a^{\dagger}_{i} = \frac{X_i - i Y_i}{2}, \qquad a_i = \frac{X_i + i Y_i}{2} . $$
For this mapping to capture the antisymmetric character of fermions, $a_i a_j = -a_j a_i$ for $i \ne j$, we must intersperse Pauli-$Z$ strings: Pauli operators acting on different qubits commute, so the $Z$ strings supply the required minus signs through $XZ = -ZX$ and $YZ = -ZY$:
$$ \displaylines{ a^{\dagger}_{1} = \frac{X_1 - i Y_1}{2} \otimes 1 \otimes 1 \otimes \cdots \otimes 1 \\ a^{\dagger}_{2} = Z_1 \otimes \frac{X_2 - i Y_2}{2} \otimes 1 \otimes \cdots \otimes 1 \\ a^{\dagger}_{n} = Z_1 \otimes Z_2 \otimes \cdots \otimes \frac{X_n - i Y_n}{2} . } $$
We also get
$$ X_i Y_i = i Z_i, \qquad \therefore \; n_i = a^{\dagger}_i a_i = \frac{1-Z_i}{2} . $$
Under the Jordan-Wigner transformation, the Hubbard Hamiltonian above becomes
$$ \displaylines{ H = -\frac{t}{2} \sum_{\langle i,j \rangle} Z_{j+1:i-1} (X_i X_j + Y_i Y_j) \\ + \frac{U}{4} \sum_i (1-Z_i^{\uparrow})(1-Z_i^{\downarrow}) - \frac{\mu}{2} \sum_{i, \sigma} (1-Z_i^{\sigma}) . } $$
Other mappings include the Bravyi-Kitaev transformation.
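To make the mapping concrete, the following minimal sketch (plain NumPy, not tied to any particular quantum SDK) builds Jordan-Wigner operators for a few sites and checks the anticommutation relations and the number-operator identity stated above:

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(site, n_sites):
    """Jordan-Wigner a_site = Z (x) ... (x) Z (x) (X + iY)/2 (x) I (x) ... (x) I."""
    ops = [Z] * site + [(X + 1j * Y) / 2] + [I2] * (n_sites - site - 1)
    return kron_all(ops)

n = 3
a = [jw_annihilation(i, n) for i in range(n)]

# Check canonical anticommutation relations {a_i, a_j^dagger} = delta_ij
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        expected = np.eye(2 ** n) if i == j else np.zeros((2 ** n, 2 ** n))
        assert np.allclose(anti, expected)

# Number operator n_0 = a_0^dagger a_0 equals (1 - Z_0)/2
n0 = a[0].conj().T @ a[0]
assert np.allclose(n0, kron_all([(I2 - Z) / 2, I2, I2]))
print("Jordan-Wigner anticommutation relations verified.")
```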
B. Tensor Network Optimization
Now that we can convert our Hamiltonian into the language of quantum computing, we must face the fundamental problem of optimizing a quantum circuit: using its parameters efficiently. We can address this through differentiable programming, given hardware constraints, by minimizing a loss function that encodes the problem we want to solve,
$$ \alpha_{t+1} = \alpha_t - \eta \, \nabla_{\alpha} \mathcal{L}(\alpha_t) , $$
where $\alpha$ are the parameters, $\eta$ is the learning rate, and $\nabla_{\alpha}\mathcal{L}$ is the gradient of the loss function. The loss function provides the feedback on how well a quantum circuit performs for a given set of parameters, which are updated at each step, as is common in machine learning. We can use tensor networks (TN): mathematical, graphical representations of multi-dimensional arrays that store information as tensor nodes in a computation graph. We can significantly enhance optimization by exploiting automatic differentiation within the tensor computation graph. This backpropagation technique exploits the chain rule of partial derivatives to propagate the gradient back from the network output and compute the gradient with respect to each weight.
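As a minimal illustration of reverse-mode automatic differentiation and the update rule above, the sketch below uses a hypothetical quadratic loss as a stand-in for a circuit-level loss:

```python
import torch

# Reverse-mode automatic differentiation (backpropagation) on a tiny computation graph;
# the tensors and loss here are hypothetical stand-ins for a circuit-level loss.
A = torch.randn(4, 4)
x = torch.randn(4, requires_grad=True)      # "parameters" of the graph

y = A @ x                                   # forward pass through the graph
loss = torch.sum(y ** 2)                    # scalar loss L(x) = ||Ax||^2

loss.backward()                             # backpropagate via the chain rule

# Analytic gradient of ||Ax||^2 is 2 A^T A x; autograd reproduces it.
print(torch.allclose(x.grad, 2 * A.T @ A @ x.detach()))

# One gradient-descent update: x_{t+1} = x_t - eta * dL/dx
eta = 0.05
with torch.no_grad():
    x -= eta * x.grad
```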
C. Automatically Differentiable Quantum Circuits Tensor Networks (ADQC-TN)
Now that we have discussed optimization, how can we leverage these tensor networks and automatically differentiate them in quantum circuits? We have the TN tools to simulate quantum models with ample Hilbert space, and backpropagation to reduce complexity, but what do we do with them? The simple answer is to reframe our thinking and apply them to quantum circuits. Remember that our goal is to simulate a model on quantum circuits,
$$ |\psi_{tar}\rangle = U(\alpha)\,|\psi_{evol}\rangle , $$
where $|\psi_{tar}\rangle$ is the target state obtained by applying the operation $U(\alpha)$ to the state $|\psi_{evol}\rangle$. We want to minimize the error between the target and evolved states,
$$ F = -\frac{1}{N}\ln \bigl| \langle \psi_{tar} | U(\alpha)|\psi_{evol}\rangle \bigr| , $$
where $F$ is the negative logarithmic fidelity quantifying the closeness between $|\psi_{tar}\rangle$ and $|\psi_{evol}\rangle$ under the operation $U(\alpha)$. We therefore need to find the set of unitary gates, i.e., the optimal parameters $\alpha^{*}$, that minimizes $F$, our loss function. This can be done by updating the gates in the direction opposite to their gradients, integrated with TN methods.
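A minimal sketch of this objective, assuming a single hypothetical RY(theta) gate trained so that U(theta)|0> approaches the target state |+>, is shown below; it only illustrates the loss and update, not the authors' implementation:

```python
import torch

# Train one parameter of a hypothetical single-qubit RY(theta) gate so that
# U(theta)|0> approaches the target state |+>, by minimizing the negative log fidelity.
psi_evol = torch.tensor([1.0, 0.0])                                   # |0>
psi_tar = torch.tensor([1.0, 1.0]) / torch.sqrt(torch.tensor(2.0))    # |+>

theta = torch.tensor(0.1, requires_grad=True)   # latent gate parameter
eta = 0.2                                       # learning rate
N = 1                                           # number of qubits

for step in range(300):
    U = torch.stack([
        torch.stack([torch.cos(theta / 2), -torch.sin(theta / 2)]),
        torch.stack([torch.sin(theta / 2),  torch.cos(theta / 2)]),
    ])
    overlap = torch.dot(psi_tar, U @ psi_evol)
    F = -(1.0 / N) * torch.log(torch.abs(overlap))   # negative logarithmic fidelity
    F.backward()
    with torch.no_grad():
        theta -= eta * theta.grad                     # gradient step on the gate parameter
    theta.grad.zero_()

print(float(theta), float(F))   # theta approaches pi/2, F approaches 0
```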
IV. Applications
A. Variational Quantum Eigensolver (VQE)
One current, powerful application is the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm that variationally minimizes the expectation value of a Hamiltonian to estimate its ground-state energy.
B. Modeling
The advent of Automatically Differentiable Quantum Circuits (ADQCs) allows for preparing many-qubit target quantum states by optimizing gates through differentiable programming via backpropagation. ADQCs introduce unconstrained, differentiable latent gates that are projected onto unitary gates satisfying quantum constraints; optimizing these latent gates layer by layer yields efficient state preparation, achieves low fidelity errors, and can compress matrix product state (MPS) representations with a compression ratio of $r \sim O(10^{-3})$.
V. Future Research
There are two main sectors of future research from ADQC-TN optimization on models: other physical models and applicable optimization. Future research into non-local Hamiltonians and their respective techniques and approximations extends to models beyond the 2D Hubbard model. Similarly, higher dimensions and more complex, accurate models are good starting points. Having a framework for deciding which methods apply where, so that they can be tailored for real applications, would be a natural extension. The framework proposed for optimizing parameters in this paper relies mainly on quantum computing software. Work on more advanced and enhanced libraries and programming could determine usability, automation, utilization, and integration into real-world systems. New loss functions, flexible architectures, and alternative qubit mappings, in combination with more methodological and pragmatic engineering fronts, will aid in the discovery and application of accurate, performant quantum models. Furthermore, future research can also be seen in real-world applications of the techniques discussed in this paper. The integration of machine learning methods with automatic differentiation and tensor networks using quantum circuits can help in model tuning, complex algorithms, and optimization in the fields of artificial intelligence and quantum simulation. This can already be seen in existing work.
VI. Discussion
Further technical examination of this ADQC-TN approach can reveal several aspects of the modeled Hamiltonian and of optimization as a field. Firstly, alternative transformations such as the Bravyi-Kitaev algorithm offer potential for more efficient reductions that could reduce overall circuit depth, and a more detailed, quantitative comparison of circuit complexity under different fermion-to-qubit mappings could identify advantageous transformations for a given model. Secondly, the tensor network architecture design space is extensive and depends heavily on depth, entanglement, connectivity, and the desired model. Architectures balancing expressiveness and trainability may be particularly well suited, but systematic evaluation of tensor network configurations will be essential for creating performant models applicable beyond physical domains. Thirdly, while mathematically convenient, reliance on fidelity as the sole optimization metric is not ideal and is best complemented by alternative, domain-specific loss functions when training toward precise experimental objectives. Fourthly, differentiable programming is still in its infancy and requires more development of widely available software, meaning that modeling difficulty and complexity will depend on the software used. This paper lays out a framework for optimizing the 2D Hubbard Hamiltonian that is implementable in quantum programming languages. For other models alike, it can be done in three steps: transforming the Hamiltonian into qubit operators (for example via the Jordan-Wigner mapping), constructing a differentiable circuit or tensor-network ansatz, and minimizing a fidelity-based loss via backpropagation.
VII. Conclusion
Pursuing a simplified framework for leveraging optimization and programming techniques to transition from theoretical quantum physics to applied quantum computing is essential for understanding the promise of new quantum simulation methods and of hybrid quantum methods over purely classical simulations. This paper has proposed a framework for optimizing parameters of the 2D Hubbard model by integrating automatically differentiable quantum circuits with tensor networks. We reviewed techniques for gradient-based optimization via backpropagation through ADQCs to enable efficient tuning of quantum circuits. The Hubbard model was transformed into qubit operators using the Jordan-Wigner transformation for implementation on a quantum circuit. By minimizing the fidelity error between target and evolved states, the ADQC-TN framework can optimize model parameters to match experimental measurements. The proposed ADQC-TN optimization framework for simulating the 2D Hubbard model offers a path forward despite hardware constraints. As quantum computing matures, techniques like differentiable programming will help implement precise quantum models and revolutionize our understanding of complex quantum materials. New research into complex quantum systems and their applications in quantum computing is necessary for the diversity of software and paradigms. Such techniques leverage what is already known and make it more applicable, addressing growing demands such as computational complexity, costly simulations, and scalability. By outlining a methodology for the potential application of automatic differentiation with tensor networks to models like the Hubbard model, this paper summarizes the problems faced and the current, novel solutions to them. This approach can go beyond simulating local models and extend to other parameters of increasing complexity. As our technology catches up, looking at these problems from a different perspective will help in the endeavor to make sense of and apply these innovations despite the limitations of our time.
VIII. References
D. Wecker, M. B. Hastings, N. Wiebe, B. K. Clark, C. Nayak, and M. Troyer, "Solving strongly correlated electron models on a quantum computer," Phys. Rev. A, vol. 92, no. 6, pp. 062318, Dec. 2015, doi: 10.1103/PhysRevA.92.062318.
C. W. Bauer et al., "Quantum Simulation for High Energy Physics," arXiv, Apr. 07, 2022, doi: 10.48550/arXiv.2204.03381.
J. Preskill, "Quantum Computing in the NISQ era and beyond," Quantum, vol. 2, pp. 79, Aug. 2018, doi: 10.22331/q-2018-08-06-79.
M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, "Elucidating reaction mechanisms on quantum computers," Proc. Natl. Acad. Sci. U.S.A., vol. 114, no. 29, pp. 7555—7560, Jul. 2017, doi: 10.1073/pnas.1619152114.
J. Fraxanet, T. Salamon, and M. Lewenstein, "The Coming Decades of Quantum Simulation," 2022, doi: 10.48550/ARXIV.2204.08905.
"Quantum Optics and Quantum Many-Body Systems." Quantum Computing. Available: https://qoqms.phys.strath.ac.uk/index.html. Accessed: August 24, 2023.
C. W. Bauer et al., "Quantum Simulation for High Energy Physics," PRX Quantum, vol. 4, no. 2, p. 027001, May 2023, doi: 10.1103/PRXQuantum.4.027001.
Q. Liu, "Comparisons of Conventional Computing and Quantum Computing Approaches," HSET, vol. 38, pp. 502—507, Mar. 2023, doi: 10.54097/hset.v38i.5875.
O. Kyriienko, A. E. Paine, and V. E. Elfving, "Protocols for Trainable and Differentiable Quantum Generative Modelling," arXiv, Feb. 16, 2022. Accessed: Aug. 21, 2023. doi: 10.48550/arXiv.2202.08253.
E. Cocchi et al., "Equation of State of the Two-Dimensional Hubbard Model," Phys. Rev. Lett., vol. 116, no. 17, p. 175301, Apr. 2016, doi: 10.1103/PhysRevLett.116.175301.
V. Celebonovic, "The two-dimensional Hubbard model: a theoretical tool for molecular electronics," J. Phys.: Conf. Ser., vol. 253, p. 012004, Nov. 2010, doi: 10.1088/1742-6596/253/1/012004.
C. Miles et al., "Correlator convolutional neural networks as an interpretable architecture for image-like quantum matter data," Nat Commun, vol. 12, no. 1, p. 3905, Jun. 2021, doi: 10.1038/s41467-021-23952-w.
H.-C. Jiang and T. P. Devereaux, "Superconductivity in the doped Hubbard model and its interplay with next-nearest hopping t'," Science, vol. 365, no. 6460, pp. 1424—1428, Sep. 2019, doi: 10.1126/science.aal5304.
X.-W. Guan, "Algebraic Bethe ansatz for the one-dimensional Hubbard model with open boundaries," J. Phys. A: Math. Gen., vol. 33, no. 30, pp. 5391—5404, Aug. 2000, doi: 10.1088/0305-4470/33/30/309.
"Analysis of Algorithms — Big-O Analysis." GeeksforGeeks. Available: https://www.geeksforgeeks.org/analysis-algorithms-big-o-analysis/article-meta-div/ . Accessed: August 24, 2023.
J. W. Z. Lau, K. H. Lim, H. Shrotriya, and L. B. Kwek, "NISQ computing: where are we and where do we go?," AAPPS Bull., vol. 32, no. 1, p. 27, Sep. 2022, doi: 10.1007/s43673-022-00058-z.
A. Tranter, P. J. Love, F. Mintert, and P. V. Coveney, "A Comparison of the Bravyi-Kitaev and Jordan-Wigner Transformations for the Quantum Simulation of Quantum Chemistry," J. Chem. Theory Comput., vol. 14, no. 11, pp. 5617-5630, Nov. 2018, doi: 10.1021/acs.jctc.8b00450.
M. Treinish, "Qiskit/qiskit-metapackage: Qiskit 0.44.0." Zenodo, Jul. 27, 2023, doi: 10.5281/ZENODO.2573505.
M. Watabe, K. Shiba, M. Sogabe, K. Sakamoto, and T. Sogabe, "Quantum Circuit Parameters Learning with Gradient Descent Using Backpropagation," 2019, doi: 10.48550/ARXIV.1910.14266.
H.-J. Liao, J.-G. Liu, L. Wang, and T. Xiang, "Differentiable Programming Tensor Networks," Phys. Rev. X, vol. 9, no. 3, p. 031041, Sep. 2019, doi: 10.1103/PhysRevX.9.031041.
A. Kathuria, "PyTorch 101, Part 1: Understanding Graphs, Automatic Differentiation and Autograd," Paperspace Blog. Available: https://blog.paperspace.com/pytorch-101-understanding-graphs-and-automatic-differentiation/ . Accessed: August 24, 2023.
P.-F. Zhou, R. Hong, and S.-J. Ran, "Automatically Differentiable Quantum Circuit for Many-qubit State Preparation," 2021, doi: 10.48550/ARXIV.2104.14949.
J. Tilly et al., "The Variational Quantum Eigensolver: a review of methods and best practices," 2021, doi: 10.48550/ARXIV.2111.05176.
J. M. Clary, E. B. Jones, D. Vigil-Fowler, C. Chang, and P. Graf, "Exploring the scaling limitations of the variational quantum eigensolver with the bond dissociation of hydride diatomic molecules," Int J of Quantum Chemistry, vol. 123, no. 11, p. e27097, Jun. 2023, doi: 10.1002/qua.27097.
S. Raubitzek and K. Mallinger, "On the Applicability of Quantum Machine Learning," Entropy, vol. 25, no. 7, p. 992, Jun. 2023, doi: 10.3390/e25070992.
R. Orus, "A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States," 2013, doi: 10.48550/ARXIV.1306.2164.
R. Orus, "Tensor networks for complex quantum systems," 2018, doi: 10.48550/ARXIV.1812.04011.
X.-Z. Luo, J.-G. Liu, P. Zhang, and L. Wang, "Yao.jl: Extensible, Efficient Framework for Quantum Algorithm Design," 2019, doi: 10.48550/ARXIV.1912.10877.
H. Huang, L. Ni, and H. Yu, "LTNN: An energy efficient machine learning accelerator on 3D CMOSRRAM for layer- wise tensorized neural network," in 2017 30th IEEE International System-on-Chip Conference (SOCC), Munich: IEEE, Sep. 2017, pp. 280—285. doi: 10.1109/SOCC.2017.8226058.
M. Wang, Y. Pan, Z. Xu, X. Yang, G. Li, and A. Cichocki, "Tensor Networks Meet Neural Networks: A Survey and Future Perspectives," 2023, doi: 10.48550/ARXIV.2302.09019.
Abstract Alzheimer's disease has long been a subject of study, and discoveries have brought hypotheses about the mechanisms related to the disease. This study aims to bring a clear understanding of the significant impact of human behaviors, such as diet, on Alzheimer's disease. Diet indirectly affects the human body and, significantly, the brain; in the case of memory disorders, diet affects the neurotransmitters that play an important role in the function of the brain and memory. The study examined various relations between variables; hypotheses indicate that a transmembrane protein plays a key role in neurodegeneration and subsequently disturbs neurotransmitter function. The information gathered shows that certain diet styles cause oxidative stress, which is considered one of the main drivers of neurodegeneration. The study traces the impact of each variable on the others and relates these findings to the central case of Alzheimer's disease.
I. Introduction
The brain is an extraordinary organ, responsible for thoughts, memories, and the movement and function of the body's systems; consequently, brain health requires attention and care, given the brain's vulnerability to certain disorders. Cognitive brain disorders occur due to detectable destruction of brain connections or networks between neurons, as in neurodegenerative and psychiatric diseases such as Alzheimer's disease (AD), Parkinson's disease (PD), schizophrenia, depression, and multiple sclerosis (MS). Some of these disorders are mainly caused by an imbalance in neurotransmitter levels or a disturbance in their function. Neurotransmitters are messenger molecules that the nervous system uses to transmit signals between neurons across synapses. Alzheimer's disease is a cognitive disorder; about 55 million people have dementia, and 60% of this population has Alzheimer's disease. The onset stage of this disease is memory loss, and in later stages patients may have problems with speaking, eating, and swallowing, or lose the ability to walk. In some cases, the distribution of such neurological disorders revolves around human actions and habits, such as diet, which may increase the chance of developing Alzheimer's disease. This paper focuses on the relations and mechanisms underlying the impact of diet on Alzheimer's disease, presenting the relations between different variables that play an important role in the study: variables related to the impact of diet, such as oxidative stress, and others belonging to the causes of Alzheimer's disease, such as amyloid beta and acetylcholine. The study examines the relationships between these variables and their significant role in the case of Alzheimer's disease and diet.
i. Inflammatory Diet
The body's natural and important defense against microbial infections, tissue damage, and trauma is called inflammation. It supports the body's ability to recover from injury and protect itself. Inflammation promotes the immune response by enlisting innate immune cells that are capable of producing inflammatory cytokines (i.e., signaling proteins). In particular, the body's innate immune system is under the control of the transcription factor nuclear factor kappa B (NF-κB). However, inflammation can be harmful if it persists for a long time. When NF-κB is continuously activated as a result of consuming inflammatory food, inflammation becomes chronic. Chronic inflammation causes a dysregulated immune response, which disturbs physiological functions that are meant to be homeostatic. Inflammation contributes substantially to diseases such as inflammatory bowel disease (IBD), diabetes mellitus, asthma, cardiovascular diseases, depression, Alzheimer's disease (AD), and different types of cancer.
ii. Neurotransmitters
Mechanism: Neurotransmitters are the nervous system's way of delivering messages to the body's organs and other parts through nerve cells. Each neurotransmitter is responsible for a function, such as moving a certain muscle, activating memory or appetite, or controlling blood pressure.
iii. Alzheimer's disease
Alzheimer's disease is a progressive brain disorder that deteriorates with time, characterized by significant changes in the brain due to the accumulation of certain protein deposits. This process ultimately leads to brain atrophy and, in some cases, brain cell death.
II. Methods
The study examines the different relations between variables and how these variables affect each other to better understand the mechanism of Alzheimer's disease; with a clear understanding of these relations, the study can demonstrate the significant impact of diet on Alzheimer's disease. The central connection is between diet and Alzheimer's disease, and the variable relations are the connecting dots, starting with amyloid-β as a key player in Alzheimer's disease pathogenesis and considering the role of the neurotransmitters. On the other side of the study, there are various diet styles, each with its own significance; this study addresses the diet styles that have a significant correlation with Alzheimer's disease, such as the vegetarian diet, which can contribute to oxidative stress, a driver of inflammation and neurodegenerative disorders in neurons. A significant contribution is the experimental data on vegetarian diets and their impact on vitamin and mineral levels in the body; in particular, deficiency of vitamins that vegetarian diets lack, such as vitamin B12 and vitamin D, could increase the risk of Alzheimer's disease.
(I) Role of Amyloid-β in Alzheimer's disease pathogenesis
i. Biochemistry of Amyloid β-Protein and Amyloid Deposits in Alzheimer's Disease
ii. Amyloid beta and acetylcholine
III. Results
i. Lack of vitamins in the vegetarian diet associated with AD
The vegetarian diet is a diet style based on consuming plant-based food. Although this diet style may be beneficial in many respects, it has been noted that it may increase the risk of some cognitive diseases, such as Alzheimer's disease, because plant-based food lacks some essential vitamins, such as vitamin D and vitamin B12. Vitamin B12 is an essential water-soluble micronutrient that has to be consumed in sufficient quantities in the diet; it is necessary for preserving hematopoiesis and the health of neurons.
IV. Discussion
Oxidative stress is a condition caused by an imbalance between the production and accumulation of reactive oxygen species (ROS), including superoxide radicals (O2), hydrogen peroxide (H2O2), hydroxyl radicals (OH), and singlet oxygen (¹O2), in cells and tissues and the ability of a biological system to detoxify these reactive products. The majority of ROS are produced by mitochondria. Cellular respiration, the lipoxygenases (LOX) and cyclooxygenases (COX) involved in the metabolism of arachidonic acid, and endothelial and inflammatory cells can all produce superoxide radicals.
ii. Oxidative stress and the amyloid beta peptide in Alzheimer's disease
iii. Oxidative stress and diet
Reactive Oxygen Species (ROS) and Reactive Nitrogen Species (RNS), which are involved in metabolism, development, and stress response, are essentially what produce oxidative stress. This unbalanced state may cause oxidative damage through oxidative modification of cellular macromolecules, structural tissue damage, and cell death via necrosis or apoptosis. ROS are highly reactive molecules with unpaired electrons that can affect how biological processes work. Proteins, lipids, and nucleic acids may undergo structural and functional damage as a result of oxidative stress. Oxidative stress is largely caused by the mitochondria via oxidative phosphorylation, which generates intracellular ROS; meanwhile, ROS cause mitochondrial malfunction. The NADPH oxidase family (NOX) and oxidative phosphorylation in mitochondria are the main sources of H2O2. For an organism to remain in a healthy state, an appropriate level of ROS is required. However, excessive ROS have been linked to a variety of health issues, such as obesity, cancer, cardiovascular disease, and neurological illnesses. Cognitive diseases such as Parkinson's disease and Alzheimer's disease are examples of neurodegenerative diseases that affect the elderly. They are characterized by a progressive loss of neurons and diminished mobility or cognitive function. Mitochondrial dysfunction is one of these disorders' key traits. To supply the energy required for cellular functions, particularly the synthesis of neurotransmitters and synaptic plasticity, the mitochondria in neurons play a crucial role. Increased mitochondrial permeability, mitochondrial disorganization, oxidative damage to mtDNA, weakened antioxidant defenses, and shortening of telomeres are all associated with mitochondrial dysfunction. Due to their high energy needs, high fatty acid content, high mitochondrial density, and low availability of antioxidant compounds, neurons are particularly vulnerable to oxidative stress. In Alzheimer's disease, Aβ aggregation causes Ca2+ release from the endoplasmic reticulum to the cytoplasm, which can lead to an increase in the accumulation of ROS. Neuronal functions are impaired, which further leads to neuroinflammation and neuronal loss. Diet can help control mitochondrial disease. A study found that patients with Lennox-Gastaut syndrome (LGS), a common form of refractory epilepsy with mitochondrial dysfunction, experienced a significant clinical improvement in both seizures and cognitive performance. Patients with heart failure benefit from taking more docosahexaenoic acid (DHA), an n-3 polyunsaturated fatty acid. By binding to membrane phospholipids, lowering the viscosity of mitochondrial membranes, and speeding up the uptake of Ca2+, DHA supplementation can enhance the DHA content in mitochondrial phospholipids, preventing the onset of left ventricular failure. Unpaired electrons are produced by cells during regular cellular respiration as well as under stressful conditions, usually via oxygen- or nitrogen-based byproducts. These highly unstable pro-oxidant compounds have the potential to oxidize nearby biological macromolecules. Over time, the formation and buildup of reactive pro-oxidant species can harm lipids, carbohydrates, proteins, and nucleic acids. This oxidative stress has the potential to exacerbate a number of age-related degenerative disorders, including Alzheimer's and Parkinson's.
There are various diet styles, and one of the most common is the Western diet, which is characterized by a high intake of saturated fats, highly refined carbohydrates, and animal-based protein, and a deficiency in the consumption of plant-based fiber. It has been shown that people who eat a Western diet are more likely to develop chronic disease and have higher levels of oxidative stress. In the Western diet style, consuming too much fat causes oxidative stress, mitochondrial damage, and inflammation. High-calorie diets disrupt redox processes, accelerating signs of aging and raising the risk of chronic diseases. Antioxidants, including vitamins and minerals, can counteract this oxidative damage, as they play a role in protecting the cell from free radicals such as ROS and RNS. Therefore, diet plays a crucial role in health: specific food consumption can disturb antioxidant levels in the body and thereby affect cellular function and disease risk, emphasizing the importance of balanced nutrition and lifestyle interventions.
V. Conclusion
In conclusion, this research adds to our understanding of the complex interactions between nutrition, inflammation, neurotransmitters, and oxidative stress in Alzheimer's disease. It underlines the importance of dietary and lifestyle changes in lowering the risk of Alzheimer's disease and lays the groundwork for future research into targeted therapeutics. As we continue to learn more about the complexity of neurodegenerative disorders, a multifaceted strategy that includes nutrition, neurobiology, and oxidative stress management may hold the key to a better future for people at risk of Alzheimer's disease. Our research has revealed several crucial findings, shedding light on prospective routes for further investigation and therapeutic approaches. Our findings highlight the significant role of nutrition in the development and progression of Alzheimer's disease. We have discussed how an inflammatory diet high in processed foods, red meats, sweet desserts, and high-sugar beverages promotes chronic inflammation, a known contributor to AD. In contrast, we have highlighted the protective potential of anti-inflammatory foods containing omega-3 fatty acids, vitamins, and polyphenols, which may reduce the risk of AD. Amyloid-β (Aβ), a key player in AD pathogenesis, has been explored in the context of its interactions with neurotransmitters and oxidative stress. Our findings suggest that Aβ accumulation can disrupt cholinergic and serotonergic neurotransmission, contributing to cognitive decline. Moreover, the interplay between Aβ, metal ions, and oxidative stress may accelerate neurodegeneration. Oxidative stress emerges as a common denominator linking inflammatory diets, neurotransmitter dysregulation, and Aβ toxicity in Alzheimer's disease. The overproduction of reactive oxygen and nitrogen species (ROS and RNS) causes cellular damage, notably in mitochondria, and contributes to neurodegenerative processes. Managing oxidative stress through food and antioxidants may provide therapeutic benefits. Our findings highlight the possibility of dietary treatments reducing the risk and slowing the course of Alzheimer's disease. Individuals who follow an anti-inflammatory diet may reduce chronic inflammation, support neurotransmitter balance, and combat oxidative stress, boosting brain health and memory preservation.
VI. References
Abstract In our never-ending cosmic quest to understand the origin of the universe, galactic evolution is profound, as galaxies make up a considerable portion of our universe. Additionally, understanding galactic evolution and the phenomena involved will provide us with the necessary foundation to make predictions about the fate of galaxies and our Milky Way. Although many advances have been made in the field of cosmology, our knowledge of galactic formation and evolution still has key gaps. To understand galactic evolution, cosmologists design models of the universe and simulate galactic evolution under the effects of dark matter, dark energy, and baryonic matter. In this article, we mainly discuss two types of frameworks used in designing galactic simulation models, the semi-analytic and numerical hydrodynamic frameworks; we also talk briefly about the N-body and Lambda Cold Dark Matter frameworks. Numerical hydrodynamic simulations provide a tool for investigating the complex and dynamic interactions between numerous physical processes, while semi-analytic models use analytical approximation techniques to handle a range of variables. When choosing a simulation model, the computational complexity of the hydrodynamics framework and the uncertainty of the semi-analytic models prove to be a conundrum. We therefore discuss both frameworks, comparing their advantages and limitations. We conclude that the two frameworks currently compensate for each other's shortcomings, and that a superior understanding of galactic formation and evolution requires a more comprehensive framework that combines both approaches.
I. Introduction
There is still a lack of knowledge regarding the mechanisms that led to the formation and evolution of the earliest galaxies following the Big Bang. According to previous research, understanding the mechanisms underlying their formation and evolution is crucial to understanding how galaxies will develop in the future.
II. The Physical Processes of Formation and Evolution
The models used to comprehend and implement the formation and evolution of galaxies mostly include the physical processes covered in this section.
i. Gravity
The first galaxies are thought to begin as small clouds of dust and stars, and as other clouds come near them, gravity ties them together. The cosmological parameters and the characteristics of dark matter affect the shape and magnitude of the primordial power spectrum of density fluctuations. The number of dark matter halos of a particular mass that have collapsed at any given moment can be calculated from this spectrum, which is processed by gravity to determine how quickly these halos expand throughout cosmic time through merging and accretion.
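One widely used analytic estimate of this halo abundance is the Press–Schechter mass function; the schematic form below is a standard result quoted here for orientation, not a derivation specific to this paper:

$$\frac{dn}{dM} \;=\; \sqrt{\frac{2}{\pi}}\;\frac{\bar{\rho}}{M^{2}}\;\frac{\delta_c}{\sigma(M)}\left|\frac{d\ln\sigma}{d\ln M}\right|\exp\!\left(-\frac{\delta_c^{2}}{2\sigma^{2}(M)}\right),$$

where $\sigma(M)$ is the rms density fluctuation smoothed on mass scale $M$ (fixed by the primordial power spectrum), $\bar{\rho}$ is the mean matter density, and $\delta_c \approx 1.686$ is the critical linear overdensity for collapse. Evaluating this expression at different epochs is what lets modelers track how quickly halos of a given mass build up through merging and accretion.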
ii. Hydrodynamics and Thermal Evolution
Intense shocks are created during the collapse of an overdense region made of gas and dark matter, which raises the entropy of the gas. The gas's ability to radiate thermal energy away and cool down efficiently will then decide how the gas evolves in the future. Two-body radiative processes are the main cooling mechanisms important for galaxy formation during most of cosmic history. At temperatures above 10^7^ Kelvin, gas becomes entirely collisionally ionized and cools mostly through bremsstrahlung (free-free emission).
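As a rough guide (standard scalings, not results taken from this article), two-body radiative losses scale with the product of the densities of the interacting species; in the bremsstrahlung-dominated regime the volumetric cooling rate grows with temperature roughly as

$$\Lambda_{\rm ff} \;\propto\; n_e\,n_i\,T^{1/2},$$

so the characteristic cooling time of shock-heated halo gas can be estimated as

$$t_{\rm cool} \;\simeq\; \frac{\tfrac{3}{2}\,n\,k_B T}{n_e\,n_i\,\Lambda(T)},$$

with $\Lambda(T)$ the cooling function. Comparing $t_{\rm cool}$ with the dynamical time of the halo is the basic criterion models use to decide whether gas can condense and feed star formation.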
iii. Star Formation
Once the gas has condensed into the halo's central regions, it may start to self-gravitate, i.e., be bound more by its own gravity than by the dark matter. If cooling processes prevail over heating, a runaway process can occur, because gas cools more quickly the higher its density: Giant Molecular Cloud (GMC) complexes form, and eventually some dense cloud cores within these complexes collapse and reach the extreme densities required to ignite nuclear fusion.
iv. Star Formation Feedback
Less than 10% of the current global baryon budget, according to observations, is in the form of stars. We would anticipate that most of the gas would have cooled and generated stars by the present day in Cold Dark Matter (CDM) models without any sort of "feedback" (or suppression of cooling and star formation). This "overcooling problem" was acknowledged even by the forerunners of the earliest models of galaxy formation within a CDM framework, who hypothesized that the energy from supernova explosions may heat gas and possibly blow it out of galaxies, impeding star formation.
v. Black Hole Formation and Growth
The first black holes may have formed in the early universe as the remnants of Population III (metal-free) stars, through the direct collapse of extremely low angular momentum gas, or through stellar dynamical processes.
vi. Active Galactic Nuclei (AGN) Feedback
Strong observational evidence suggests that a supermassive black hole is present in most spheroid-dominated galaxies, which make up most large galaxies. A straightforward calculation shows that the energy released in creating these black holes must be greater than the host galaxy's binding energy, indicating that it could have a significant impact on galaxy formation. However, how effectively this energy can couple to the gas in and around galaxies is still unknown.
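A rough, order-of-magnitude version of that calculation, using commonly quoted scaling values rather than numbers from this paper, runs as follows: the energy released in growing a black hole of mass $M_{\rm BH}$ at radiative efficiency $\epsilon \approx 0.1$ is $E_{\rm BH} \approx \epsilon M_{\rm BH} c^{2}$, while the binding energy of a host bulge of stellar mass $M_{*}$ and velocity dispersion $\sigma$ is roughly $E_{\rm bind} \approx M_{*}\sigma^{2}$. Taking $M_{\rm BH} \approx 10^{-3} M_{*}$ and $\sigma \approx 200\ \mathrm{km\,s^{-1}}$,

$$\frac{E_{\rm BH}}{E_{\rm bind}} \;\approx\; \frac{\epsilon\,M_{\rm BH}\,c^{2}}{M_{*}\,\sigma^{2}} \;\approx\; 10^{-4}\left(\frac{3\times10^{5}}{200}\right)^{2} \;\approx\; 2\times10^{2},$$

i.e., a couple of hundred times the binding energy, which is why even a weak coupling of this energy to the surrounding gas can reshape galaxy formation.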
vii. Stellar Populations and Chemical Evolution
Many modelers convolve their predicted star formation histories with straightforward stellar population models that provide the Ultraviolet to Near-Infrared Spectral Energy Distribution for stellar populations of a single age and metallicity.
viii. Radiative Transfer
Star and AGN radiation can have a significant effect on galaxy formation. Gas can be directly heated by radiation, and it can also change cooling rates by altering the ionization state of the gas (particularly for gas that is metal-enriched).
III. Current Simulation Frameworks
The many advances in cosmology over the past decades have enhanced our understanding of galactic formation and evolution. This has resulted in a wealth of frameworks and models that can simulate galactic evolution to a relatively great extent based on sub-grid recipes. Due to computational limitations, these sub-grid recipes are parametrized, and the parameters are tuned to current observations of galactic properties. Although some of these techniques have been able to reproduce current observations and give us many insights into galactic evolution, their accuracy remains questionable. Three popular, currently used frameworks are the semi-analytic framework, the numerical hydrodynamics framework, and the Lambda Cold Dark Matter (ΛCDM) framework, where Lambda is Einstein's cosmological constant. It is worth clarifying that the ΛCDM framework is itself built on the hydrodynamics framework and was considered a hydrodynamic model until it was widely accepted and used as a framework for building variants of the ΛCDM model. Thus, we will limit our discussion to a brief introduction to the ΛCDM framework.
i. The Semi-analytic Framework
The semi-analytic framework, sometimes called the "phenomenological galaxy formation framework," approaches each of the physical processes mentioned above using approximate, analytic techniques. Due to this approximation, semi-analytic models possess a modular framework, which means it is straightforward to revise the implementation of various phenomena to reproduce a more detailed simulation according to current observations.
ii. The N-body/Numerical Hydrodynamics Framework
The N-body framework (or gravity solvers) is the basic structure for various simulation models (e.g., hydrodynamic models and even semi-analytic models). In N-body models, the simulated matter is divided into a chosen number of particles, or "bodies," hence the name N-body. Then, the forces acting on each particle by the surrounding ones are computed, and the simulation evolves by recomputing the forces at a set time step. Additionally, the boundaries of the simulation volume are comoving and periodic, and the expansion rate of the simulation volume is computed using the Friedmann equations (derived from the Einstein equations within the context of General Relativity), though the particle equations of motion are solved using the Newtonian versions, since General Relativity corrections are mostly negligible.
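To make the particle picture concrete, the sketch below shows a minimal direct-summation N-body step in Python with a kick-drift-kick (leapfrog) integrator. It is illustrative only: real cosmological codes use tree or particle-mesh gravity solvers, comoving coordinates, periodic boundaries, and adaptive time steps, none of which are modeled here, and all names and units are arbitrary choices for the example.

```python
import numpy as np

def accelerations(pos, mass, softening=0.05):
    """Direct-summation gravitational acceleration on each particle (G = 1)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                          # displacements to all particles
        r2 = np.sum(d**2, axis=1) + softening**2  # softened squared distances
        r2[i] = np.inf                            # exclude self-interaction
        acc[i] = np.sum((mass / r2**1.5)[:, None] * d, axis=0)
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """Advance positions and velocities by one kick-drift-kick step."""
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc
    pos = pos + dt * vel_half
    vel = vel_half + 0.5 * dt * accelerations(pos, mass)
    return pos, vel

# Toy example: evolve 100 equal-mass particles for a few steps.
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(100, 3))
vel = np.zeros((100, 3))
mass = np.full(100, 1.0 / 100)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
```

The direct summation above scales as O(N²), which is exactly why production codes replace it with tree or mesh-based force calculations.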
iii. The Lambda Cold Dark Matter (ΛCDM) Framework
Our modern theory of cosmology suggests that the universe mainly consists of dark matter and dark energy, which together account for more than 95% of the energy density of the universe. In the most popular ΛCDM model, dark matter is considered cold (slow-moving) and collisionless and makes up ~25% of the cosmic mass-energy density, while dark energy is represented by a "cosmological constant" Λ, comprising ~70% of the cosmic mass-energy density. The remaining ~4% is baryons (which in the context of this model include leptons), i.e., the ordinary atoms that make up the universe we can see.
IV. Semi-analytic Models
The approach known as "semi-analytic modeling" or "phenomenological galaxy formation modeling" uses an analytical strategy to tackle the many physical processes related to galaxy formation.
V. Numerical Hydrodynamic Models
Hydrodynamic models are employed to solve the equations of the physics concerned with galactic evolution (e.g., the hydrodynamic equations) via direct simulation. In this method, the equations of gravity, hydrodynamics, thermodynamics, and radiative cooling/transfer are solved for a chosen number of points, depending on the available computational power, either along the flow of the fluid (particle-based), on a fixed grid (mesh-based), or with a hybrid of both, depending on the specifics of each model. These approaches are classified, respectively, as Lagrangian methods, Eulerian methods, and arbitrary hybrids of both. In Lagrangian methods such as the popular Smoothed Particle Hydrodynamics (SPH) model, the particles are treated as programming objects, where each particle carries the information about the fluid and moves freely within it.
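For reference, the core of the SPH approach is the smoothed density estimate (standard textbook form, not specific to any one code): the density at particle $i$ is built from the masses of its neighbours $j$, weighted by a smoothing kernel $W$ of width $h$,

$$\rho_i \;=\; \sum_j m_j\, W\!\left(|\mathbf{r}_i - \mathbf{r}_j|,\, h\right),$$

and pressure forces are obtained by differentiating these smoothed fields, so the fluid is carried entirely by the particles rather than by a fixed grid.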
VI. Methods
We used two inclusion/exclusion criteria to narrow our research down to the focus of this paper. First, we included the research papers related to, and only to, the hydrodynamics and semi-analytic frameworks and the implementation of different physical processes through them.
VII. Conclusion
From the physical processes, we can conclude that the implementation of star formation as a sub-resolution model with individual stars as its building blocks will still be necessary in future cosmological simulations.
VIII. Acknowledgment
We cannot deny the tremendous help we received while writing this article. We would like to thank all the amazing people at Youth Science Journal for their oversight and guidance. We would also love to thank our mentor Mustafa Mohammed for his guidance and support.
IX. References
Abstract This paper gives an overview of the advancements in Artificial Intelligence (AI) for the prognosis and treatment of cardiovascular disease (CVD). AI techniques, including machine learning algorithms, have shown promise in analyzing medical images for automated detection of cardiac abnormalities and risk stratification. AI-driven decision-support systems aid in optimizing treatment strategies by leveraging patient data for personalized interventions. Integration of AI with wearable devices and remote monitoring systems enables real-time data collection, early detection of cardiac events, and effective remote care management. However, challenges related to data privacy, algorithm bias, and regulatory frameworks need to be addressed. Collaborative efforts among clinicians, researchers, and policymakers are crucial for harnessing the full potential of AI in CVD care.
I. Introduction
According to the World Health Organization, cardiovascular disease (CVD) is the most prevalent mortality determinant in the world, taking an estimated 17.9 million lives each year, which is approximately one-third of global mortality.
II. AI and Electrocardiograms
i. The potential of utilizing AI in CVD diagnosis
Digital healthcare encompasses the provision of tailored health and medical services, the utilization of electronic devices, systems, and platforms, as well as the integration of a wide range of medical services.
Wearable devices
The utilization of wearable devices in the health sector is advancing rapidly, particularly in the areas of telemedicine, patient tracking, and mobile health systems. The use of these devices for remote monitoring and diagnostics of common cardiovascular diseases has been the subject of research.
Risk Prediction Models
A risk prediction model is a statistical regression model that relates a disease outcome to the characteristics of an individual. Risk prediction models are commonly referred to as risk stratification models or prognostic models. A risk prediction model typically includes multiple risk factors (or predictors) that are significantly related to the disease outcome. The association of a risk factor with the disease outcome is assessed based on the relative risk associated with that factor in the population, rather than in a single individual. A risk score may be calculated from a risk prediction model for each individual, with a higher risk score indicating an increased risk of the disease. The risk score can be used to classify individuals into groups with different levels of risk of the disease, and people in the high-risk groups are targeted for intervention strategies.
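As a concrete, deliberately simplified illustration of this idea, the sketch below fits a logistic-regression risk model on synthetic data and converts predicted probabilities into a risk score. The predictor names, coefficients, and the 0.7 high-risk threshold are hypothetical choices for the example, not values from any clinical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic cohort: 1000 "patients" with three risk factors and a binary outcome.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(55, 10, 1000),    # age (years)
    rng.normal(130, 15, 1000),   # systolic blood pressure (mmHg)
    rng.normal(200, 30, 1000),   # total cholesterol (mg/dL)
])
# Toy relationship: outcome probability rises with each factor.
logit = -15 + 0.08 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The predicted probability serves as the individual risk score; thresholds
# then stratify patients into low- and high-risk groups for intervention.
risk_scores = model.predict_proba(X_test)[:, 1]
high_risk = risk_scores > 0.7
print(f"{high_risk.mean():.1%} of test patients flagged as high risk")
```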
ii. Electrocardiograms: properties and advantages.
iii. Types of deep-learning models used in ECG analysis.
Deep learning (DL) is a class of machine learning that performs much better on unstructured or very large data, and the rise of high-performance computing has made it increasingly popular. It focuses on creating and training complex neural networks to learn and make intelligent decisions from large volumes of data. Deep learning is called "deep" because it passes the data through numerous layers, where each layer gradually extracts features and passes the data to the next layer. The first layers extract low-level features, and the later layers combine features to create a comprehensive representation. Deep learning models are built using artificial neural networks, which are computational structures inspired by the organization of neurons in the human brain. These networks consist of layers of interconnected nodes (neurons) that process and transform data. Nowadays, deep learning is used in many applications such as Google's voice and image recognition, Netflix and Amazon's recommendation engines, Apple's Siri, automatic email and text replies, and chatbots.
III. Convolutional AI in ECG Analysis:
i. Convolutional neural networks (CNNs) and their applications.
As mentioned previously, CNNs are the most prominent category of neural networks, especially for high-dimensional data like images and videos. They fall under the supervised learning category of neural networks. A CNN is a multi-layer neural network consisting of multiple back-to-back layers connected in a feed-forward manner [20, 22]. It is inspired by the neurobiology of the visual cortex: it contains convolutional layer(s) followed by fully connected (FC) layer(s), with the possibility of subsampling layers between these two types of layers.
ii. Discuss how CNNs are adapted for ECG analysis.
The analysis has three main steps: data preprocessing, feature extraction, and classification. The ECG signal is characterized by high noise and high complexity; therefore, during the preprocessing stage, the signals are denoised and padded or cut into segments of equal size. In feature extraction, features can be extracted from the morphology of the ECG signal in the time and frequency domains or directly from the heart rhythm.
Detecting Myocardial Infarction (or heart attacks) using CNNs:
In this study, CNNs are used to detect myocardial infarction (MI) without relying on the detection of ST deviation or T peak and without extracting handcrafted features. Instead, it utilizes a continuous wavelet transform and a CNN architecture to process the ECG data as 2D images. The ECG signal is divided into five-second segments and normalized to the normal distribution. "The data segment is passed to a continuous wavelet transform with bior1.5 mother wavelet and scale from 1 to 256".
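A hedged sketch of that pipeline is shown below: an ECG segment is turned into a 2D time-frequency image with a continuous wavelet transform and passed through a small 2D CNN. The synthetic segment, the Morlet wavelet (used here because it is directly available for the CWT in PyWavelets, whereas the cited study used a bior1.5 mother wavelet), the 64 scales, and the tiny network layout are all illustrative choices, not the architecture of the original work.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

# 1) Toy ECG segment: 5 s at 360 Hz (synthetic sine + noise as a stand-in).
fs, seconds = 360, 5
t = np.arange(fs * seconds) / fs
segment = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
segment = (segment - segment.mean()) / segment.std()      # normalize

# 2) Continuous wavelet transform -> 2D scalogram "image".
scales = np.arange(1, 65)                                  # 64 scales (illustrative)
coeffs, _ = pywt.cwt(segment, scales, "morl")              # shape: (64, fs*seconds)
image = torch.tensor(np.abs(coeffs), dtype=torch.float32).unsqueeze(0).unsqueeze(0)

# 3) Minimal 2D CNN: convolution/pooling feature extraction, then a fully
#    connected head with two outputs (e.g., normal vs. myocardial infarction).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
logits = model(image)                                      # shape: (1, 2)
print(logits.shape)
```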
iii. Benefits of CNNs in ECG analysis
As already discussed above, the ECG is a powerful tool in the hands of cardiologists, as it can lead them to detect premature cases based on analysis of the formed waves. While this is a very common method in ECG analysis, it can lead to a variety of human errors that can cost people their lives. This is the reason that research into CNNs, as discussed above, has been heavily leaned on. The key reason a deep learning model such as a CNN can outperform humans is that human interpretation differs heavily from one cardiologist to another: cardiologists can interpret the different signals and rhythms differently due to different backgrounds and experiences, failing to take sex, age, and ethnicity into account, or being biased towards one view before analyzing the test. The CNN algorithm takes all of the above into account, as it can infer certain phenotypes from a patient's electrocardiogram reading, thus rendering itself superior to an average cardiologist, or to experts in some cases, as will be shown later in this paper.
IV. Autocardiogram necessity
i. Integration with new technologies: body sensors, MRI, echo, and more
The utilization of cutting-edge technology has become increasingly pertinent in the treatment and diagnosis of cardiovascular disorders.
ii. AI-analyzed ECGs: Accurate decision-making and complications prediction
Electrocardiogram (ECG) analysis is a way of assessing and tracking cardiac activity by studying the electrical signals produced during cardiac cycles.
V. Conclusion
After reviewing multiple research papers and filling in others' gaps, this paper was able to deduce the validity of AI-aided ECG analysis. It was first concluded that an electrocardiogram can be analyzed autonomously using a deep-learning method called a convolutional neural network. This method achieved a high success rate, as it successfully deciphered the patterns of the ECG and their implications. Not only that, but it was also able to detect cardiovascular disease more accurately than cardiologists, and much earlier. It also proved able to interact with IoT technologies such as body sensors and smartwatches to offer 24/7 tracking of the human heart without intrusion or discomfort and with great accuracy. Finally, it was able to perceive complications of CVDs before their occurrence and prevent the advancement of the disease. While it is not known when this technology will be widely available to the public, it has without a doubt proven itself. However, while this technology is highly accurate and precise, it takes years to train the algorithms responsible for it, which might render it impractical until a suitable database is established.
VI. References
Abstract For thousands of years, humans have marveled at auroras — lights from solar winds interacting with Earth's magnetic field. Although we know a lot about auroras in our solar system, we are just starting to study them on exoplanets. Studying these lights on other planets helps us learn about their magnetic fields, winds, and atmospheres. This review dived into emerging exoplanetary aurora research, studying how we detect them from far away. We discussed what these auroras can tell us about magnetism, habitability, and possible life beyond our solar system. Auroras can indicate planetary habitability by revealing a magnetic field, which is crucial for life as it helps to maintain water on the surface of the planet and protects it from harmful radiation from its parent star. The study of auroras on exoplanets is essential to our understanding of the universe and its diverse phenomena. We also explored the challenges of observing exoplanetary auroras, shedding light on future discoveries about these radiant displays. Delving more into potential biosignatures, it can be stated that the magnetic fields of exoplanets can be one of the leading signals of their habitability. Based on our analysis of various works, such as those of Ramirez and Lazio, who considered magnetic fields to be of great importance in protecting habitable zones, we concluded that auroras are reliable indicators of any form of life on exoplanets.
I. Introduction
Auroras, also known as polar lights, are natural light displays that occur in the polar regions of planets. They are instigated by solar ionized particles plummeting into the Earth's upper atmosphere at velocities of up to 45 million mph. The magnetic field of the Earth then guides the particles toward the Arctic and Antarctic regions. The charged particles enter Earth's atmosphere, exciting gas atoms to generate auroras. The color of auroras is determined by the gas mixture present in the atmosphere: the green color comes from oxygen, while nitrogen creates purples, blues, and pinks. There are several places around the world where auroras often appear; popular destinations include Iceland, Norway, Finland, Sweden, Canada, and Alaska. While auroras have been extensively studied in our solar system, little is known about auroras on exoplanets. Observations and potential findings about auroras on exoplanets have been limited because of the difficulty of identifying them from Earth. However, new advancements in radio telescope technology, such as the LOw-Frequency ARray (LOFAR), an SKA pathfinder with exceptional sensitivity at 150 MHz, have opened the possibility of detecting radio waves from neighboring exoplanets and their host stars, enabling us to make more exact observations.
II. Early studies
i. Pre-2000 studies on the presence of exoplanetary auroras
In this part of the section, we provide a short overview of 'historical' (pre-2000) works on auroras in exoplanets. The exploration of auroras on exoplanets prior to the year 2000 was limited due to technological constraints; hence, studies were primarily theoretical and conceptual. However, they laid the foundation for understanding the potential presence and characteristics of these fascinating phenomena beyond our solar system. Researchers laid the groundwork by adapting our understanding of magnetospheric physics to potential exoplanetary scenarios and set the stage for future advancements. The work of Donahue and colleagues in the late 1970s was instrumental in shaping early discussions on exoplanetary auroras.
ii. Post-2000 studies on the presence of exoplanetary auroras
Over the past few years, there has been an upturn in the study of auroras on extrasolar planets, highlighted by noteworthy improvements in observational methods and theoretical models. In this part of the section, we dive into the post-2000 era of research on this engaging phenomenon, marking the significance of this period. Focusing on work after the year 2000 is crucial because of the accelerated pace of technological advancement and the growing breadth of our understanding of planets beyond our solar system. Ongoing progress in space-based telescopes, spectrographic techniques, and computer simulations has granted scientists the opportunity to gain new insights into the properties of the atmospheres and magnetospheres of exoplanets. Extending this line of reasoning, the study by W. M. Farrell et al. theorized that magnetized exoplanets may emit radio frequencies, similar to the planets in the Solar System, with emissions recurring over the planetary rotation period.
III. Auroras
i. Auroral formation mechanisms
Auroras occur as a result of the interaction between charged particles from the Sun and the magnetic field of the Earth. Specifically, the heat in the Sun's outermost atmospheric layer, the corona, makes its hydrogen and helium atoms vibrate and shake off protons and electrons. These particles are too fast to be contained by the Sun's gravity and stream away as plasma; this outflow is called the solar wind. When this electrically charged gas collides with atoms and molecules in the Earth's atmosphere, it emits energy in the form of light. This mechanism is, therefore, called "Solar Wind and Magnetosphere Interaction" (Figure 1).
ii. Types of Auroras
iii. Aurora properties
Shape, altitude of emission, solar activity influence, and sounds are all various characteristics of auroras.
IV. Aurora simulations
V. Auroras on other planets and moons
Recent developments in the exploration of planets have discovered auroral phenomena on different planets and moons in our solar system. This part of the section explores noteworthy examples of auroras out of Earth's confines, contributing to a deeper understanding of space weather phenomena. Jupiter, a gas planet known for its enormous magnetic field, provides a good example of auroras on a planetary scale. Driven by the interaction between Jupiter's magnetic field and charged particles, these auroras exceed Earth's in size and intensity.
IV. Magnetic field
Magnetic fields are a crucial phenomenon not only in our solar system but also beyond it. They are an important tool for observing, studying, and analyzing planets and characterizing them. Magnetic activity causes various phenomena such as sunspots, coronal heating, solar flares, and coronal mass ejections. Understanding the nature of matter in the Solar System is essential for understanding the mechanism generating Earth's geomagnetic field and other planets' magnetic fields. A magnetic field provides information about a body's internal structure and thermal evolution, as well as their histories. The presence and behavior of these fields are often driven by electrical currents deep within the planet, providing insight into its physical state and dynamics.
i. Planetary Magnetism Development
All discoveries in planetary magnetic fields were made by observing magnetized objects such as the Earth or the Sun. Since ancient times, people have known of the existence of certain forces inherent in the Earth, and the first recordings about them were written in the 11^th^ century by the Chinese. William Gilbert made the first attempt to explain the mysterious phenomenon in 1600, proposing that the Earth was magnetic because it was rotating. A later milestone came from Heinrich Schwabe, who, observing the Sun between 1826 and 1843, discovered the 11-year sunspot cycle and noted that geomagnetic storms corresponded to sunspot maxima.
ii. Magnetic dynamo theory's general concepts
iii. What then generates the magnetic field?
The magnetic dynamo theory suggests that a magnetic field is created by swirling motions of liquid conducting material in planetary interiors. Metallic materials, such as metallic hydrogen in Jupiter and Saturn, have free electrons that can move around, generating a magnetic field. To sum up, a planet's magnetic field is generated by moving charges in a liquid conducting material in its interior. Rapid rotation increases the stirring of the material, making the magnetic field stronger; however, if the liquid interior solidifies or the rotation slows, the magnetic field weakens.
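A standard way to express the requirement for a self-sustaining dynamo (a textbook criterion, not something derived in this review) is the magnetic Reynolds number, which compares field generation by fluid motion to decay by ohmic diffusion:

$$R_m = \frac{U L}{\eta},$$

where $U$ is a typical flow speed in the conducting region, $L$ its size, and $\eta$ the magnetic diffusivity. Dynamo action requires $R_m$ to exceed a threshold of order 10–100, which is why a large, rapidly stirred, electrically conducting interior favors a strong planetary field.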
iv. Properties of Planetary Magnetic Fields
There are two types of magnetic fields, namely remanent and intrinsic fields, with an intermediate form induced by external forces. Remanent fields indicate an object that was once magnetized and still retains magnetism, while intrinsic fields are active phenomena resulting from an object's ongoing properties. Most planetary magnetic fields are self-sustaining intrinsic fields generated by an internal dynamo, following the model proposed by Parker (1955) and later modified to become the Kinematic Dynamo Theory (Fortes 1997). This model requires a planet to have a molten outer core of a conducting material, convective motion within the core, and an energy source to power the convective motion. Remanent fields are stable and cannot account for fields with changing polarity.
v. Earth-like exoplanets' magnetic fields
The study presents magnetic dipolar moment estimations for terrestrial planets with masses up to 12 Earth masses (ME), radii up to 2.8 Earth radii (RE), and different rotation rates.
V. Influence of auroras on the habitability
Modern Earth is exposed to XUV and particle emissions from the quiescent and active Sun, including solar-wind plasma with embedded magnetic fields. Extremes of these external influences occur in the form of solar flares, CME-driven plasma, magnetic-field enhancements, shocks, and related solar energetic particles (SEPs). Galactic cosmic rays (GCRs) diffuse into the inner heliosphere and the planetary atmospheres, with specific atmospheric and surface influences potentially consequential for our technological society and the biosphere.
VI. Impact on Proxima b's habitability
Proxima Centauri is the third and smallest member of the triple star system Alpha Centauri, and the closest star to our Solar System. Proxima Centauri has two known exoplanets and one candidate exoplanet: Proxima Centauri b, Proxima Centauri d, and the disputed Proxima Centauri c. Proxima Centauri b, with a minimum mass of 1.27 Earth masses and an orbital period of 11.2 days, is clearly located in the circumstellar habitable zone (CHZ). This information is supported by various studies and research.
VII. Previous research on detecting habitability using the aurora and magnetic field
Within this specified part of the section, we present a concise summary of scholarly investigations that explore the influence of auroral activity on the potential habitability of exoplanets. These studies are of crucial importance in our quest to identify habitable worlds beyond our solar system.
VIII. Discussion
The main factors influencing the habitability of exoplanets include the evolutionary phase of bolometric luminosity and magnetic activity of host stars, their impact on a planet, and internal planetary dynamics. For some planets, the luminous pre-main-sequence phase of the host star may have driven the planet into a runaway greenhouse state before it had a chance to become habitable. The composition of the planet, related to the elemental abundances of its host star, may determine its interior structure and influence the tectonic history of the planet. Finally, due to the proximity of the planet to the star and the weakening of the magnetospheric field due to tidal locking, the stellar environment could lead to a strong bombardment of the atmosphere with high-energy particles and UV radiation, either stripping the planet of its atmosphere or leaving its surface inhospitable to life. Conversely, auroras and magnetic fields can serve as valid signs of life-hospitable exoplanets, since they are interconnected with the factors mentioned earlier. According to the research of Ramirez and Lazio, magnetic fields play a crucial role in shielding a planet from harmful stellar radiation and charged particles. Thanks to an active magnetic field, the conditions necessary for life can be maintained on an exoplanet. Similarly, the presence of auroras is often connected to a planetary atmosphere interacting with its magnetic field, creating colorful light displays. The detection of auroras on an exoplanet could therefore suggest the presence of an atmosphere and an active magnetic field, both of which are vital for sustaining life. Moreover, the observation of auroras and magnetic fields on exoplanets can be a key clue that life-supporting conditions may exist.
IX. Conclusion
To sum up, exoplanetary auroras are mysterious yet captivating phenomena that humankind has marveled at for centuries, and they open a unique window into the understanding of space weather, atmospheric conditions, habitability assessment of exoplanets, and, most importantly, their magnetospheric interaction with their stars' environment. All this brings new insights for technological advances and for testing theoretical models. Researching this subject enabled us to unravel new aspects in areas that were once thought to be settled and unchangeable, such as the presence of auroras beyond our solar system and their potential as habitability indicators. Throughout our work, we explained the definition, characteristics, and formation mechanisms of auroras and the properties and concepts of planetary magnetic fields, highlighting the possibility of auroras being a sign of life on extrasolar planets. We decided to review this field because we wanted to identify gaps, such as the lack of research on the potential role of exoplanetary auroras as biosignatures and the neglect of the diversity of exoplanetary environments across which auroras may vary. The study of auroras on planets beyond our solar system is a demonstration of the power of interdisciplinary collaboration in broadening our understanding of space. This field has brought together scientists from different scientific disciplines like astrophysics, planetary science, atmospheric science, and magnetospheric physics. Astrophysics enhanced our understanding of the behavior of host stars and the impact of stellar activity on exoplanet environments; planetary science informs us of exoplanetary atmospheres; atmospheric science explains the interactions between exoplanetary atmospheres and incoming stellar radiation; lastly, magnetospheric physics helps unravel the intricate interplay between exoplanetary magnetospheres and their host stars. Exploring exoplanetary auroras is evidence of human curiosity and scientific ingenuity, driving us to delve more into the interactions between distant worlds and their host stars. The hunt for dancing lights in alien skies shows the complexity of exoplanetary systems, reminding us that our understanding of the universe is most vivid when we work together to decipher its most captivating mysteries.
X. References
Abstract This research explores the intricate relationship between gene editing, DNA transcription, translocation mechanisms, and their impact on the mental health of individuals aged 18 and above. It seeks to contribute novel insights to this underexplored intersection, propose potential therapeutic interventions, and offer avenues for further exploration. The study extensively reviews two prominent gene-editing technologies: CRISPR-Cas9 and TALENs. It elucidates their mechanisms and identifies potential errors occurring during the final stages of Non-Homologous End Joining (NHEJ), which may lead to gene disruptions and mutations. These disruptions can significantly affect crucial gene functions related to neural regulation, brain development, and mental health disorders. Moreover, the study emphasizes the significance of Homology-Directed Repair (HDR) and its precision in effecting DNA sequence changes. It highlights the potential for errors during this process and their direct implications for neural processes and mental health. To address the impact of gene editing on mental health, a computational methodology using Python and the Biopython library is proposed. Focusing on the CRISPR-Cas9 method's activity on the IDH1 gene, which is associated with brain cancer, real-time monitoring through Bioluminescence Imaging (BLI) is recommended as a valuable tool for assessing gene editing efficiency and specificity.
I. Introduction
i. Genetic Editing and Its Relevance:
As the world trends towards gene editing, it's crucial to carefully consider its impact: not just the immediate benefits, but also the potential unintended consequences that can arise when venturing into uncharted territories without fully exploring the side effects. Gene editing, a form of genetic engineering, encompasses the addition, removal, alteration, or substitution of DNA within the genome of a living organism. Unlike earlier genetic engineering techniques that haphazardly integrated genetic material into a host genome, genome editing is precise, targeting specific sites for insertions.
ii. DNA Transcription and Transposition:
In an ocean full of complex processes, DNA transcription and transposition mechanisms stand out as among the most important processes affecting everything in the human body.
iii. Nexus Between Mental Health and DNA Processes:
Mental health problems have become significant in the biological world. A study in the USA estimated that around 18 to 26 percent of Americans aged 18 and older (about 1 in 5 adults) suffer from a diagnosable mental disorder.
iv. The Objectives:
Our scholarly research aims to elucidate the question: "How can computational biology be leveraged to unravel the DNA transcription and transposition mechanisms resulting from gene editing that impact the mental health of individuals above 18?" By collecting data, we will emphasize the relationship between gene editing, its impacts on DNA transposition and translocation, and mental health disorders. After analyzing the data, we aim to utilize computational biology to model and detect how the effects of genetic editing on DNA transcription and translocation negatively impact mental health, in order to propose potential therapeutic interventions for a cure.
II. Literature review
This section aims to provide a comprehensive overview of the literature relevant to this research paper. It will summarize the potential mental issues that arise from the negative effects of gene editing, by examining DNA transposition and transcription as the link between them. Each paragraph in subsection i and subsection ii will reference a specific paper and present a concise summary of its findings. Additionally, subsection iii will highlight the unique contributions of our work. To conduct the literature search, Google Scholar was employed as the search engine. The keywords "gene editing," "mental health issues," "DNA transcription disruption," and "DNA transposition" were utilized to retrieve relevant publications. The identified papers were then categorized based on their relevance, with the least relevant paper listed first and the most relevant paper listed last.
i. Uncontrolled Gene Editing: Implications and Risks:
ii. The Genetic Contribution to Mental Health Issues:
iii. Our Contribution:
Our paper aims to address the existing paucity of literature concerning the deleterious effects of gene editing and its intricate interplay with DNA transposition and transcription, as they relate to mental health outcomes. While considerable research has been conducted on this topic, as outlined in subsections i and ii, our study seeks to make noteworthy contributions in the following ways:
III. Genetic Editing and Mental Health
i. CRISPR-Cas9 System:
CRISPR-Cas9's revolutionary gene-editing technology allows for precise manipulation of DNA. It entails using a single-guide RNA (sgRNA) to direct the Cas9 enzyme to particular DNA sequences. When Cas9 reaches its target, it causes DNA double-strand breaks. The cell's repair system then fixes these breaks, frequently through insertions or deletions (indels), which disrupts the gene. For more specific changes, a repair template can be offered as an alternative. Genes can be silenced, activated, corrected of mutations, and even have reporter genes inserted using this technology. Additionally, it makes it easier to study non-coding RNAs and epigenetic changes. The adaptability of CRISPR-Cas9 revolutionizes genetic research and has enormous therapeutic potential. The CRISPR-Cas9 system is shown in Figure 1.
ii. Influence of TALENs on Mental Health:
Transcription Activator-Like Effector Nucleases (TALENs) are a powerful gene-editing technique that emerged as an improvement over the zinc finger protein method. As shown in Figure 2, TALENs consist of TALE repeats, represented as colored cylinders, and a carboxy-terminal truncated "half" repeat. Each TALE repeat contains two hypervariable residues represented by letters. The TALE-derived amino- and carboxy-terminal domains, essential for DNA binding, are depicted as blue and grey cylinders, respectively. The non-specific nuclease domain from the FokI endonuclease is illustrated as a larger orange cylinder.
TALENs function as dimers, binding to the target DNA site. The TALE-derived amino- and carboxy-terminal domains flanking the repeats may interact with the DNA. Cleavage by the FokI domains occurs within the "spacer" sequence, located between the two regions of DNA bound by the two TALEN monomers.
A schematic diagram illustrates the structure of a TALE- derived DNA-binding domain. The amino acid sequence of a single TALE repeat is expanded below, with the two hypervariable residues highlighted in orange and bold text.
The TALE-derived DNA-binding domain is aligned with its target DNA sequence. The alignment shows how the repeat domains of TALEs correspond to single bases in the target DNA site according to the TALE code. A 5' thymine preceding the first base bound by a TALE repeat is indicated.
iii. The Errors Have Invaded Everything:
After gene editing has occurred, the cell repairs the resulting break through one of two pathways, as shown in Figure 3:
IV. Computational methodology
As our research objective trends toward collecting data and explaining the process by which gene editing affects mental health, in this section we model this process using figures, which will be demonstrated in the discussion section. Our model relies on the Python language with the Biopython library. We chose this library because it is a flexible and user-friendly Python library for modeling biological processes. It offers a wide range of features, compatibility across platforms, interoperability with bioinformatics tools, and strong community support. It is an important tool for computational biology research due to its open-source nature, scalability, and integration with data analysis libraries. In this modeling, we utilized the CRISPR method, one of the most commonly used gene editing methods, and the IDH1 gene with its sequence 'ACGTGCAGCTGGGTGGTTGTGGTTTGCTTGGCTTGAGAAGCAGGTTA...........', as it is the gene responsible for 50% of gliomas (brain cancer). Mutations in this gene, after NHEJ occurs, can lead not only to a failure to cure the cancer but also to abnormal production of 2-hydroxyglutarate (2-HG), an oncometabolite. This metabolic change interferes with numerous cellular pathways, primarily in the cytoplasm, affecting epigenetic control and causing DNA hypermethylation. These metabolic changes primarily drive cancer progression but can also indirectly impact mental health. Abnormal metabolites and related disruptions may induce neuroinflammation, neurotransmitter imbalances, and other neurological effects, potentially affecting mood and cognitive abilities.
Code Description
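The sketch below gives a minimal, toy version of the kind of model described here: it uses Biopython's Seq objects to represent a target, simulates a Cas9 double-strand break at a guide-defined position, and mimics NHEJ with a random small deletion. The guide sequence, cut offset, and the truncated sequence fragment quoted above are placeholders for illustration; they are not the full IDH1 locus or the actual code used in this study.

```python
import random
from Bio.Seq import Seq

# Illustrative only: toy CRISPR-Cas9 cut followed by an NHEJ-style indel.
# The target fragment and guide below are hypothetical stand-ins.
random.seed(1)

target = Seq("ACGTGCAGCTGGGTGGTTGTGGTTTGCTTGGCTTGAGAAGCAGGTTA")  # placeholder fragment
guide = "GGGTGGTTGTGGTTTGCTTG"           # hypothetical 20-nt protospacer
cut_offset = 17                           # Cas9 cuts ~3 bp upstream of the PAM end

start = str(target).find(guide)
cut_site = start + cut_offset             # blunt double-strand break position

# NHEJ is error-prone: model it as a random small deletion at the break.
indel_size = random.randint(1, 6)
edited = target[:cut_site] + target[cut_site + indel_size:]

frameshift = (len(target) - len(edited)) % 3 != 0
print(f"Cut at position {cut_site}, deleted {indel_size} bp")
print(f"Edited sequence: {edited}")
print(f"Frameshift introduced: {frameshift}")
```

A run of this toy model simply reports whether the simulated indel disrupts the reading frame, which is the kind of disruption the text links to downstream metabolic and neurological effects.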
V. Ethical Considerations and Study Constraints
i. Ethical Considerations:
First of all, the study protocol and ethical considerations underwent rigorous review by the YSJ's Research Review Board. This board meticulously assessed the study design, methods, and data handling procedures to ensure strict adherence to ethical guidelines. Notably, the study obtained approval from the aforementioned board prior to the commencement of data collection, thus safeguarding the rights and welfare of the participants. As for ethical considerations, the data we have collected is among the precursors of ethical consideration in peer review. Additionally, all sources are cited to ensure credibility. In terms of transparency, since our research paper is theoretical in nature, we employed a specific methodology, which involved collecting data from books and prior articles. We then analyzed this data to identify connections between gene editing and mental health. Subsequently, we modeled the relationship between the CRISPR-Cas9 method and its influence on the IDH1 gene, which is associated with brain cancer. In the discussion section, we outlined potential cures for these conditions. The ethical considerations surrounding gene editing, particularly in the context of human genome editing, have sparked intense debate and led to the formulation of guidelines and regulations. The advent of CRISPR technology, with its potential for precise genetic modifications, has amplified these discussions. One central concern is safety. The risk of unintended off-target effects and mosaicism poses significant challenges. Many experts agree that, until germline genome editing is proven safe through rigorous research, it should not be employed for clinical reproductive purposes. Some argue that existing technologies like preimplantation genetic diagnosis (PGD) and in-vitro fertilization (IVF) offer safer alternatives for preventing genetic diseases. However, exceptions are acknowledged. Germline editing might be justified when both prospective parents carry disease-causing variants or for addressing polygenic disorders. The balance between therapeutic use and potential misuse, such as for non-therapeutic enhancements, remains a subject of ethical debate. Informed consent is another complex issue. Obtaining informed consent for germline therapy is challenging since the affected individuals are embryos and future generations. Nonetheless, proponents argue that parents routinely make decisions affecting their future children, including those related to PGD and IVF. Justice and equity concerns arise as well. Gene editing's accessibility could exacerbate existing healthcare disparities and create genetic privilege. To prevent such outcomes, ethical guidelines and regulations must be established. Regarding genome-editing research involving embryos, moral and religious objections exist, and federal funding restrictions apply in the United States. Nevertheless, some consider such research important for advancing scientific understanding. Research on nonviable embryos, and on viable embryos under certain conditions, has been permitted in some countries, each with its own moral considerations.
ii. Study Constraints:
The study constraints of our paper were significant. Owing to limited resources and time constraints, the designated period for data collection was confined to a mere two weeks, with only seven weeks allocated for the entire research paper. This predicament imposed inherent limitations regarding the recruitment of methods and the extent of data analysis that could be undertaken. Furthermore, financial constraints significantly impacted access to advanced tools and equipment. As our research couldn't utilize advanced model applications or laboratory facilities for monitoring the gene editing process, the limitations of resources posed a considerable challenge. Additionally, gene editing methods are relatively new and complex, further exacerbating these challenges.
VI. Discussion
i. The Findings:
We initiated our research paper by addressing the challenges associated with gene editing, DNA transcription, translocation, and mental health disorders. We embarked on a comprehensive review of prior articles, focusing on the impact of gene editing on DNA transcription and translocation, as well as its influence on mental health. Our research journey involved delving into review books such as 'Principles of Genetics'.
ii. Theoretical Potential Cure:
In the first step, we will monitor the process to allow us to understand the activity of the CRISPR-Cas9 method on IDH1, using it as an example to simulate these steps. Monitoring using Bioluminescence Imaging (BLI) is useful here. What makes BLI exceptionally useful is its capacity for real-time tracking and visualization of CRISPR-Cas9-induced changes. It enables researchers not only to observe genetic modifications as they happen but also to quantify their intensity. This real-time aspect can be invaluable for assessing the efficiency and specificity of gene editing processes. Researchers can use BLI to monitor changes over time, gaining insights into the dynamics of gene editing within living organisms. Moreover, BLI is non-invasive, minimizing disruptions to the biological system under investigation, and it can provide longitudinal data, allowing for the assessment of gene-editing persistence. It is an elegant and comprehensive approach for studying the in vivo impacts of CRISPR-Cas9 technology. The process begins by meticulously selecting or designing a bioluminescent reporter that aligns with the IDH1 gene's genomic region of interest. This reporter is thoughtfully constructed to incorporate a gene encoding a light-emitting protein, such as firefly luciferase. The choice of this reporter gene is critical because it will act as a beacon, emitting light in response to any genetic changes initiated by CRISPR-Cas9 within the IDH1 gene. Once the bioluminescent reporter is crafted, it undergoes genetic modification to ensure that it integrates seamlessly into the genomic landscape surrounding the IDH1 gene. This integration is achieved using tailored techniques like viral vectors or direct transfection, ensuring that the reporter becomes an integral part of the IDH1 gene environment. With the bioluminescent reporter now strategically placed within the IDH1 gene's vicinity, the next step involves introducing the CRISPR-Cas9 system, guided by a gRNA molecule, into the target cells. The primary objective is to enable this system to initiate precise double-strand breaks (DSBs) at predetermined sites within the IDH1 gene. These DSBs are strategically chosen to correspond to specific regions of interest within the IDH1 gene, allowing researchers to monitor changes in these regions with precision. The hallmark of BLI's utility in this context lies in its ability to exploit the cellular repair mechanisms, notably the Non-Homologous End Joining (NHEJ) pathway. When DSBs occur within the IDH1 gene, the cellular repair machinery, including NHEJ, springs into action. NHEJ's role is to mend these breaks, but it is known for its potential to introduce errors during the repair process. In the case of the IDH1 gene, NHEJ might inadvertently disrupt the integrated bioluminescent reporter gene, leading to a reduction in bioluminescence. This reduction serves as a real-time indicator of CRISPR-Cas9 activity specifically within the IDH1 gene. It allows researchers to monitor and quantify the impact of gene editing on the IDH1 gene in living organisms, offering invaluable insights into the dynamics of this process and its potential therapeutic application.
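As a very small illustration of how such a readout might be quantified, the snippet below estimates editing efficiency as the relative drop in bioluminescent signal in edited samples compared with unedited controls; the numbers are invented placeholders, not experimental data.

```python
import numpy as np

# Hypothetical photon-count readings (arbitrary units) from a BLI experiment.
control_signal = np.array([980.0, 1015.0, 1002.0])   # unedited reporter cells
edited_signal = np.array([430.0, 455.0, 470.0])      # CRISPR-Cas9 treated cells

# A reduction in luminescence is read as disruption of the reporter by NHEJ,
# so the fractional signal loss serves as a rough editing-efficiency estimate.
efficiency = 1.0 - edited_signal.mean() / control_signal.mean()
print(f"Estimated editing efficiency: {efficiency:.1%}")
```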
iii. DNA Ligation:
VII. Conclusion and Recommendation
In this research paper, we have embarked on a comprehensive journey to explore the intricate relationship between gene editing, DNA transcription, translocation, and their impact on mental health in adults aged 18 and above. Our study's primary objectives were to shed light on this underexplored area of research and propose potential therapeutic interventions. We have made several noteworthy contributions, including our focus on the adult demographic, our emphasis on the connection between gene editing and mental health through DNA processes, and our utilization of computational biology for modeling and analysis. Two prominent gene-editing technologies, CRISPR-Cas9 and TALENs, have been discussed in detail. The CRISPR-Cas9 system's precision in directing Cas9 to target DNA sequences and the influence of TALENs on gene editing have been highlighted. We have elucidated the potential errors occurring in the final steps of Non-Homologous End Joining (NHEJ), which can result in gene disruptions and mutations. These disruptions can impact key gene functions related to neurotransmitter regulation, brain development, and mental health conditions. Homology-Directed Repair (HDR), a more accurate but slower repair pathway, has also been explained. The significance of HDR in making precise DNA sequence changes and the potential for errors during the process have been discussed. These errors can lead to functional alterations with direct implications for neural processes and mental health. To address the impact of gene editing on mental health, we have proposed a computational methodology using the Python language and the Biopython library. Our modeling focuses on the CRISPR-Cas9 method's activity on the IDH1 gene, which is associated with brain cancer. The real-time monitoring of this process through Bioluminescence Imaging (BLI) offers a valuable tool for assessing gene editing efficiency and specificity. Additionally, we have highlighted the importance of DNA ligation in correcting genetic defects arising from gene editing. DNA ligases, including DNA ligase I, III, and IV, play a crucial role in sealing nicks in DNA strands. The process of introducing genes encoding ligase enzymes into cells to correct genetic defects has been outlined. Monitoring this correction process by BLI allows for the precise detection of the time of ligation. In conclusion, this research paper has contributed to a deeper understanding of how gene editing processes impact DNA transcription, translocation, and mental health, especially in the context of errors occurring in NHEJ. By utilizing computational biology and innovative techniques like BLI, we aim to propose potential therapeutic interventions for genetic mental health disorders. Our paper has successfully added new information to the literature. Our research significantly contributes to the existing body of knowledge by delving into the intricate relationship between gene editing, DNA transcription, translocation, and their impact on mental health in adults aged 18 and above. Ethical considerations surrounding gene editing have also been discussed, emphasizing the need for safety, informed consent, justice, and equity in genetic research. For further researchers, we recommend: Addressing Limitations: It's crucial for future studies to acknowledge and work within the limitations we have outlined in our research. Limited resources, time constraints, and the complexity of gene editing methods can pose challenges.
Researchers should carefully plan their studies, allocate adequate resources, and manage time effectively to overcome these constraints. Exploring Potential Cures: The potential therapeutic interventions we have proposed, such as monitoring gene editing with Bioluminescence Imaging (BLI) and introducing genes encoding ligase enzymes for DNA ligation, should be further investigated. Researchers should conduct in-depth studies to validate the effectiveness of these interventions in correcting genetic defects related to mental health disorders. Utilizing Professional Equipment: To ensure the accuracy and reliability of research findings, it is essential for future researchers to employ professional equipment and state-of-the-art technologies. Two notable examples of such equipment are high-resolution microscopes with live-cell imaging capabilities and advanced gene editing platforms like CRISPR-Cas9 systems. These tools can provide precise data and insights necessary for groundbreaking discoveries in the field. These recommendations encompass the need to overcome limitations, delve deeper into potential cures, and utilize advanced equipment, all of which can collectively contribute to advancing our understanding of gene editing's impact on mental health and the development of effective therapeutic interventions.
VIII. References
Abstract Despite the latest advancements in the field of astrophysics, dark matter, the mysterious substance occupying vast parts of our universe, still eludes us with its obscure nature and hard-to-detect components. This research delves into different approaches for detecting dark matter present in neutron stars and calculating its mass. The approaches discussed are considered secondary techniques in the detection of dark matter, and they are scrutinized with the aim of developing a new way of detecting these elusive particles: hybrid detection methods like star spectroscopy and gravitational redshift measurements, direct detection methods like scattering experiments, and indirect detection methods like neutrino emissions. Results have shown that some specific neutron stars have a higher expectancy for the presence of dark matter particles and that some specific techniques are more feasible than others for their detection. By making the most of the latest technology, like high-resolution telescopes and software programs, it is possible to utilize dark matter's effects on neutron stars to assist in its detection. We believe that the detection of dark matter and the calculation of its mass can pave the way for revolutionary discoveries regarding the hidden intricacies of the universe.
I. Introduction
After years of research, we are still far from comprehending dark matter's properties and identifying its components. Dark matter's enigmatic nature continues to captivate scientists, as it constitutes the major portion of the matter in the universe. It cannot be seen directly, as it consists of weakly interacting particles that do not interact with any form of electromagnetic radiation, making its detection extremely challenging. However, its existence is inferred from its gravitational effects on visible matter. "Dark matter is thought to be the glue that holds galaxies together".
II. Dark Matter Candidates and Interactions
i. Evidence that supports the existence of dark matter
ii. Dark matter candidates
All the proposed candidates for dark matter are hypothetical particles that have not yet been detected. However, scientists are currently conducting several experiments aimed at detecting these elusive particles. Weakly interacting massive particles (WIMPs): these are hypothetical particles that interact with ordinary matter through the weak nuclear force. They are thought to be a good candidate for dark matter because they have the right mass and properties to explain the observed gravitational effects of dark matter; nevertheless, their existence faces several theoretical challenges. Axions: axions are hypothetical light particles that interact weakly with other particles, which makes them a good candidate for dark matter because they would be difficult to detect, yet still able to explain the observed gravitational effects of dark matter. Sterile neutrinos: these are another class of hypothetical particle, similar to neutrinos, but they do not interact through the weak force. They could potentially explain the observed gravitational effects of dark matter, as well as the observed abundance of light elements in the universe; nevertheless, their existence is still uncertain, and further research is needed to confirm their existence and properties.
III. Redshift: its relationship with dark matter
Redshift is a phenomenon in which the wavelength of electromagnetic radiation is increased as a result of the relative motion of the light source and observer. This increase in wavelength is accompanied by a decrease in frequency and photon energy. It is called "redshift" because, as the wavelength is elongated, the light is displaced towards the red end of the electromagnetic spectrum.
i. Factors causing redshift:
A. Doppler effect
When a light source is moving away from an observer, the light waves are stretched out, making their wavelength longer. This is similar to what happens when a sound source moves away from a listener: the farther the source gets, the lower the pitch of the sound.
B. Gravitational redshift
When light escapes a gravitational field, its wavelength is increased because the light loses energy as it climbs out of the field. The stronger the gravitational field, the more the light is stretched and the redder it becomes.
ii. Dark matter effect on the gravitational redshift of cosmological bodies
Since dark matter is known to contribute to the gravitational field, it is believed that the presence of dark matter halos around neutron stars can cause a significant difference in the star's observed redshift, which can consequently assist in calculating the amount of dark matter responsible for this difference.
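To make this idea quantitative, the surface redshift of a compact star follows from the Schwarzschild metric, and a dark matter halo bound within the emission radius would simply add to the enclosed mass. A minimal sketch of the relation (assuming a spherically symmetric halo contained within the stellar radius $R$):

$$1 + z_{\mathrm{NS}} = \left(1 - \frac{2GM_{\mathrm{NS}}}{Rc^{2}}\right)^{-1/2}, \qquad 1 + z_{\mathrm{obs}} \approx \left(1 - \frac{2G\,(M_{\mathrm{NS}} + M_{\mathrm{DM}})}{Rc^{2}}\right)^{-1/2},$$

so a measured excess $\Delta z = z_{\mathrm{obs}} - z_{\mathrm{NS}}$ can, in principle, be inverted for $M_{\mathrm{DM}}$.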
IV. Neutron Stars: Composition and Properties
i. Formation and Composition
Neutron stars are the remains of massive stars that died in what is known as a supernova; the initial star's mass is between 10 and 25 solar masses. When these stars run out of fuel, they collapse under their own gravity and form neutron stars. Their outer layer is a solid crust formed from normal matter; it can reach roughly a kilometer in thickness and is made up of ions and electrons. The internal part is made up of neutrons, electrons, and protons. The neutrons are packed tightly, forming a degenerate fermion gas. Protons and electrons also form degenerate fermion gases but are less dense than the neutrons. The core of these stars may contain exotic matter like quark matter and strange matter; however, information about the core is yet to be verified.
ii. Properties
iii. Types of Neutron stars
Neutron stars have multiple common properties such as the ones mentioned above. However, they can be divided into groups, each having special properties.

Property | Magnetars | Pulsars | X-ray binaries |
---|---|---|---|
Magnetic field | Extremely strong ($10^{14}$–$10^{15}$ gauss) | Strong ($10^{8}$–$10^{12}$ gauss) | Variable, depending on the type of binary system |
Rotation period | Varies from milliseconds to seconds | Varies from milliseconds to seconds | Variable, depending on the type of binary system |
Power source | Magnetic field | Rotational energy | Accretion of material from the companion star |
X-ray emission | Yes, persistent and variable | Yes, pulsed | Yes, pulsed |
Gamma-ray emission | Yes, during flares | No | Sometimes |
iv. Advantages of Binary Neutron Star Systems for Research:
The stars in binary systems are the most feasible type of neutron stars for research and study. Firstly, binary systems are brighter, which makes them easier to study with telescopes. Binary stars are also more stable, as they are held together by their mutual gravitational attraction. On the other hand, single stars can be affected by the gravitational pull of other stars in their vicinity, which can make it difficult to study them.
v. Mass limit of Neutron stars
This is the maximum mass a neutron star can possess before collapsing under its own gravity into a black hole. This happens because the strong nuclear force is not strong enough to hold the neutrons together against the immense gravity of the star. The limit is called the Tolman-Oppenheimer-Volkoff (TOV) limit, and it is estimated to be around 2.16 solar masses. It is determined by the equation of state of neutron matter, which is the relationship between the pressure and density of neutron matter. However, the TOV limit is not 100% certain. For instance, PSR J0740+6620 is a neutron star with a mass of about 2.14 solar masses; such an immense mass implies that the TOV limit must be at least this high and may even exceed the estimated 2.16 solar masses.
V. Potential interactions between dark matter and neutron stars
It is scientifically feasible to predict the presence of dark matter in the cores of neutron stars, since it does not interact with electromagnetic radiation and can therefore flow into the dense matter of these stars without being detected. Recent observations have suggested that old neutron stars may experience a notable increase in temperature, bringing them up to levels observable in the near-infrared. This phenomenon has led scientists to speculate that dark matter may be penetrating the dense matter of these stars and causing changes in their properties. This flow of dark matter has detectable effects on the surface of these neutron stars, particularly those located in dark-matter-rich regions like the Galactic center or the cores of globular clusters.
i. The density of neutron stars
Neutron stars are known for their incredibly dense nature. This density causes a critical amount of spacetime curvature, leading to a powerful gravitational field, which makes it more likely for weakly interacting dark matter particles to be captured. "Because of their strong gravitational field, neutron stars capture weakly interacting dark matter particles (WIMPs) more efficiently compared to other stars, including the white dwarfs. Once captured, the WIMPs sink to the neutron star center and annihilate."
ii. Scattering off the neutrons in the star
Dark matter can enter neutron stars by scattering off the neutrons in the star. This is because dark matter is thought to be made up of weakly interacting massive particles (WIMPs), which can interact with neutrons through the weak nuclear force. When a WIMP scatters off a neutron, it loses energy and can eventually be captured by the neutron star; it can also transfer some of its momentum to the neutron, causing the neutron to move. Scattering off the neutrons in the star can have several potentially detectable effects, including the heating of the neutron star, a change in its rotation rate, the emission of gravitational waves, and the production of neutrinos. Several ongoing research efforts aim to study this scattering of dark matter with neutron stars. Firstly, the European Pulsar Timing Array (EPTA) is a network of radio telescopes used to measure the arrival times of radio waves from pulsars; by studying the timing of these waves, scientists can look for changes in the rotation rate of pulsars that could be caused by the scattering of dark matter. Secondly, the Neutron Star Interior Composition Explorer (NICER) is a NASA satellite used to study the interior of neutron stars; NICER can measure the temperature and composition of neutron stars, which could help scientists determine whether dark matter is present in these stars. Thirdly, the Large Synoptic Survey Telescope (LSST) is a ground-based telescope in Chile that will survey the entire sky every few nights, allowing scientists to search for neutron stars that are being heated or slowed down by the scattering of dark matter.
VI. Neutron Star: Structure and Equation of State
The neutron star structure equation of state is a critical component in understanding the internal composition and properties of neutron stars and how dark matter affects them. This equation of state describes how the pressure and energy density of matter within a neutron star depend on its density. In essence, it provides insights into how matter behaves under the extreme conditions present within these stellar remnants. The equation of state for neutron stars is particularly relevant when considering the detection of dark matter within them. Dark matter, an elusive form of matter that does not emit light, might accumulate within neutron stars due to its gravitational interactions. This accumulation could influence the star's internal structure, including its density distribution and pressure profile. If dark matter were to accumulate within neutron stars, it might affect the equation of state by altering the relationship between pressure, energy density, and density itself. This, in turn, could lead to observable consequences for the star's properties, such as its mass-radius relationship or the behavior of emitted radiation. The NS equation of state (EoS) relates the pressure, $P$, to other fundamental parameters. With the sole exception of the outermost layers (a few meters thick) of an NS and of newly born NSs, the pressure in the strongly degenerate matter is independent of the temperature. The microphysics governing particle interactions across the different layers of an NS is then encapsulated in a one-parameter EoS, $P = P(\rho)$, where $P$ and $\rho$ are pressure and density, respectively. Calculations of the EoS are frequently reported in tabular form in terms of the baryon number density, $n_b$, i.e. $P = P(n_b)$, $\rho = \rho(n_b)$. The EoS is the key ingredient for NS structure calculations; its precise determination, however, is an open problem in nuclear astrophysics and is limited by our understanding of the behavior of nuclear forces in such extreme conditions.
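For reference, the structure calculations mentioned above typically integrate the standard Tolman-Oppenheimer-Volkoff (TOV) equations together with the chosen EoS $P = P(\rho)$:

$$\frac{dP}{dr} = -\,\frac{G\left[\rho(r) + \frac{P(r)}{c^{2}}\right]\left[m(r) + \frac{4\pi r^{3}P(r)}{c^{2}}\right]}{r^{2}\left[1 - \frac{2G\,m(r)}{rc^{2}}\right]}, \qquad \frac{dm}{dr} = 4\pi r^{2}\rho(r),$$

integrated outward from the center until $P$ drops to zero, which defines the stellar radius $R$ and total mass $M = m(R)$.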
VII. Detection Methods of Dark Matter in Neutron Stars
Dark matter particles interact with Earth-based detectors only extremely rarely, on the order of once per year, making their study challenging. However, dense objects such as neutron stars are promising targets for probing and studying dark matter due to their high mass density and compactness. In this section, we analyze the different techniques and methods used in the detection of dark matter in neutron stars and their constraints.
i. Indirect Detection Methods
A. Neutrino Emissions
"The detection of these neutrinos will be complementary to the accelerator- and reactor- based experiments that study neutrinos over the same energy range"B. Gamma Ray Emissions
Gamma rays are astroparticles with some unique properties that make them an excellent choice for indirect searches for WIMP dark matter. Gamma rays are characterized by their ability to travel to the observer without deflection, allowing the sources of the signal to be mapped, and by prompt emission carrying important spectral information that can be used to characterize the dark matter particle in the case of detection.
ii. Hybrid Detection Methods
A. Star Spectroscopy
iii. Direct Detection Methods
Direct detection methods of dark matter on Earth rely on the scattering of dark-matter particles from the halo of the Milky Way in a detector on Earth. Such detectors are usually set up deep underground, as at the Sanford Underground Research Facility in South Dakota.
A. Scattering Experiments
Scattering is one of the simplest and most widely used techniques in the detection of dark matter. For dark matter in the GeV mass range (the range relevant to neutron stars), it is possible for incident particles to scatter off target particles. This method of detection is limited by the cross-section of the target material used and by the excitation energy of that material. For light dark matter, elastic collisions can generate nuclear recoils inside the target's crystal lattice. The energy of the recoil is given by $E_R \sim q^{2}/(2 m_N)$ (3), where $m_N$ is the mass of the nucleus, $q \sim v\,m_{\mathrm{DM}}$ is the momentum transferred, and $v \approx 10^{-3}c$ is the DM velocity.
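As a rough numerical illustration of the recoil formula (3) above, the following sketch evaluates it in natural units; the 1 GeV dark matter mass and the neutron target are illustrative assumptions, not values reported in this paper.

```python
# Rough order-of-magnitude evaluation of the recoil-energy formula (3),
# E_R ~ q^2 / (2 m_N), with q ~ m_DM * v and v ~ 1e-3 c.
# Natural units (c = 1, masses and energies in GeV); the dark matter mass
# and neutron target below are illustrative assumptions.

m_dm = 1.0       # assumed dark matter mass, GeV
m_n = 0.939      # neutron mass, GeV
v = 1e-3         # typical dark matter velocity in units of c

q = m_dm * v                  # momentum transfer, GeV
e_recoil = q**2 / (2 * m_n)   # recoil energy, GeV

print(f"q        ~ {q:.1e} GeV")
print(f"E_recoil ~ {e_recoil:.1e} GeV (about {e_recoil * 1e6:.2f} keV)")
```

With these assumed values the recoil energy comes out at roughly half a keV, which illustrates why the excitation energy of the target material limits this method.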
VIII. Constraints and Applicability to Neutron Stars
The distinctive nature of neutron stars engenders both challenges and opportunities for dark matter detection methods:
i. Neutrino Emissions
The dense interiors of neutron stars could augment neutrino emissions from potential dark matter interactions. However, disentangling these emissions from the myriad neutrino sources inherent to neutron stars is an intricate endeavor.
ii. Gamma-ray Emissions
The magnetic fields of neutron stars could amplify gamma-ray signals originating from dark matter interactions. Yet, deciphering these signals from the diverse ensemble of gamma-ray sources is a complex puzzle.
iii. Scattering Experiments
While terrestrial scattering experiments are well-established, their adaptation to the extreme conditions within neutron stars remains uncertain.
iv. Gravitational Red Shift Measurements
Gravitational redshift measurements offer a tantalizing approach to uncovering dark matter in neutron stars. However, the intricate interplay between various neutron star properties and dark matter interactions presents a challenge.

Particle | Experiments | Advantages | Challenges |
---|---|---|---|
Gamma-ray photons | Fermi-LAT, GAMMA-400 | Point back to source; spectral signatures | Backgrounds, attenuation |
Neutrinos | IceCube/DeepCore | Point back to source; spectral signatures | Backgrounds, low statistics |
Cosmic rays | PAMELA, CTA, LAT | Low backgrounds for antimatter searches | Diffusion; do not point back to source |
IX. New Suggested Detection Approach
i. Gravitational Redshift Measurements
The gravitational field of neutron stars engenders a unique environment that might unveil the presence of dark matter. Accumulation of dark matter within these stars could alter their gravitational field, thereby affecting the observed redshift of emitted radiation. A discernible shift in the emitted spectrum, as compared to theoretical expectations, could offer a tantalizing glimpse into dark matter interactions. Noteworthy explorations of gravitational redshift in neutron star spectra are explained in the works of Tang et al.
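A minimal numerical sketch of this idea follows; the radius, redshift, and EoS-predicted mass used below are illustrative assumptions, not measurements.

```python
# Infer the gravitating mass of a neutron star from its surface gravitational
# redshift, 1 + z = (1 - 2GM/(R c^2))^(-1/2), then compare with an EoS-based
# mass prediction; any excess is a candidate dark matter contribution.
# All numbers below are illustrative assumptions, not measured values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

R = 12e3             # assumed stellar radius, m
z_obs = 0.40         # assumed measured surface redshift

# Invert 1 + z = (1 - 2GM/(R c^2))^(-1/2)  ->  M = (R c^2 / 2G) * (1 - (1+z)^-2)
M_obs = (R * c**2 / (2 * G)) * (1 - (1 + z_obs) ** -2)

M_eos = 1.90 * M_sun          # assumed EoS-predicted mass for this star
M_dm = M_obs - M_eos          # excess attributed (naively) to dark matter

print(f"Mass from redshift : {M_obs / M_sun:.2f} M_sun")
print(f"EoS prediction     : {M_eos / M_sun:.2f} M_sun")
print(f"Inferred excess    : {M_dm / M_sun:.2f} M_sun")
```

In this toy example a redshift of 0.40 at a 12 km radius corresponds to about 1.99 solar masses, so the assumed 1.90 solar-mass EoS prediction would leave roughly 0.09 solar masses as a possible dark matter excess.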
X. Discussion
The findings presented in this paper provide a comprehensive overview of the interactions between dark matter and neutron stars and of the primary approaches used to detect it, and they also highlight the challenges facing these approaches. Moreover, a novel detection method is introduced, utilizing gravitational redshift measurements to detect the presence of dark matter; however, due to a lack of precise data and instrument limitations, this finding should be interpreted with caution. Therefore, an in-depth analysis of the preliminary redshift measurements should be conducted on real-life neutron star examples whenever the data become available. Notably, these analyses are recommended to be conducted on neutron stars in binary systems because of the numerous advantages discussed above. In summary, this paper not only provides a comprehensive synthesis of knowledge regarding the intricate relationship between dark matter and neutron stars but also introduces a cutting-edge detection method. While the gravitational redshift approach holds promise, it is vital to acknowledge its limitations and invest in the meticulous analysis of real-world data to further our understanding of this complex interplay.
XI. Conclusion
The enigma of dark matter, which constitutes a significant part of our universe, continues to captivate scientists as they navigate its elusive realm. As neutron stars serve as potential sites for dark matter interaction, our pursuit encompasses direct and indirect dark matter detection methods like neutrino emissions and gamma-ray signatures. Yet, existing approaches face significant challenges. Therefore, our novel approach delves further into the intricate task of unveiling dark matter within neutron stars, leveraging advanced technology and innovative methods. We conclude that the presence of dark matter in neutron stars may be detectable through gravitational redshift measurements, as they offer a distinct perspective: the masses of isolated neutron stars (INS) can be inferred using the EoS sets predicted by GW data and nuclear experiments, and any difference between the expected mass of an INS and the mass calculated through gravitational redshift measurements would signal the presence of dark matter. However, due to a lack of precise measurements and instrument limitations, applying these equations remains unfeasible at present, so further research and advancements are required to overcome these obstacles and realize the approach's full potential.
XII. Reference
Abstract The presented project is an enhanced alternative to existing solutions available for individuals who are blind and deaf. Firstly, the Braille TTY device, priced at $6,560, allows blind-deaf individuals to answer calls through writing and screen display. Secondly, the Orbit Reader, priced at $750, is a braille display device that connects to electronic devices, enabling writing and reading for this population. Lastly, the My Vox device facilitates communication between blind, deaf, deaf-blind, and unimpaired individuals, incorporating two keyboards, a braille display, and a regular screen. To develop the improved project, an Arduino device is utilized to control the project's components. Blind-deaf individuals primarily learn through kinesthetic learning and tactile experiences, utilizing tactile sign language, tracking, tactile fingerspelling, print on palm, and Braille. The impact of this project on the community is significant, as it empowers individuals with blind-deaf disabilities to lead more normal lives, addressing their needs instead of ignoring them. Our system has a vibration motor in each position: when a name is sent to the gloves, its letters are translated into braille and the corresponding vibration motors are turned on according to each letter, while all the vibration motors are turned on when an object is near. The prototype can detect objects by using a smartphone. The prototype of the project has been successfully tested in four systems: writing, reading, obstacle avoidance, and obstacle detection. The research methodology depends on a simulation survey and on measuring accuracy and time.
I. Introduction
Human contact depends heavily on communication, which also has a big impact on how we conduct our daily lives. Unimpaired people can network with others, exchange ideas, and gain knowledge from one another. However, for those who have disabilities like blindness or deafness, communication can be very challenging. Communication access presents special difficulties for blind and deaf people, which can limit their quality of life and social participation; their independence and wellbeing can be increased by learning how to communicate with and listen to others. People in this situation face greater social isolation and reduced opportunities in education and employment, and they need better means of communication. Approximately 0.2% of the global population suffers from severe deafblindness, while 2% experience moderate deafblindness. There are estimated to be over 15 million people with severe deafblindness worldwide, roughly the population of Norway and Sweden combined.
II. Literature Review
III. Proposed Model
i. approach and tools/techniques:
The project is a device consisting of four systems. Firstly, the project depends on braille, in which every letter is encoded mainly by six dot units.
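To illustrate the six-dot encoding described above, here is a minimal sketch of how letters can be mapped to dot patterns and then to vibration motors; the pin numbers and the driver function are hypothetical placeholders, not the actual firmware of this project.

```python
# Minimal sketch of the six-dot braille encoding used by the gloves: each
# letter maps to a tuple of six dot states, and each raised dot drives one
# vibration motor. Pin numbers and the vibrate() driver are placeholders.

BRAILLE = {
    "a": (1, 0, 0, 0, 0, 0),
    "b": (1, 1, 0, 0, 0, 0),
    "c": (1, 0, 0, 1, 0, 0),
    "d": (1, 0, 0, 1, 1, 0),
    "e": (1, 0, 0, 0, 1, 0),
    # ... remaining letters follow the standard braille alphabet
}

MOTOR_PINS = [2, 3, 4, 5, 6, 7]  # hypothetical pins, one motor per dot


def vibrate(pin, on):
    """Placeholder for the real motor driver (e.g., a digital write on the Arduino)."""
    print(f"motor on pin {pin}: {'ON' if on else 'OFF'}")


def send_letter(letter):
    """Turn on the motors corresponding to the raised dots of one letter."""
    for pin, dot in zip(MOTOR_PINS, BRAILLE[letter.lower()]):
        vibrate(pin, bool(dot))


send_letter("b")  # dots 1 and 2 raised, so the first two motors vibrate
```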
ii. overview of system modules
IV. Methods
i. the prototype method.
ii. Research Methodology:
The research methodology for the project is adjusted to produce a practical project. The research concern is to make a project with three significant features: usable, practical, and well-priced. Based on these criteria, the project results mainly focus on qualitative outcomes (achieved or not achieved), while some quantitative results are used to describe the AI model's efficiency. The methodology of the research is divided into two parts. The first part is testing the code validity, the components' status, and the accuracy of the model. The second part is the survey on the user experience with the project, which is important for it to be practical. The main challenge of the project is the tested sample: finding blind-deaf participants and teaching them how the device works would need a long-term plan extending over years. The sample therefore consists of unimpaired people whose relevant senses are blocked for the experiment.
V. Results
i. Survey
The aim of the survey is to simulate the experience of using the device for many people. The number of responses is 14. The required information in the survey was:
ii. quantitative test:
The prototype was tested for the following features: 1- Object detection:

Object name | Model prediction | Model effectiveness |
---|---|---|
mouse | mouse | 50% |
pen | Toothpaste | |
chair | chair | |
lamp | Refrigerator | |
person | person | |
calculator | cellphone | |
Laptop | Laptop | |
couch | ||
bag | person | |
window | oven |
sentence length (in letters) | Time to receive | Time to send another |
---|---|---|
7 | 42 sec | 20 sec |
20 | 2 min | 1 min 39 sec |
12 | 1 min 12 sec | 44 sec |
Average per letter | 6 sec | 4.87 sec |
Actual distance (cm) | Measured distance (cm) |
---|---|
20 | 23 (15% increase) |
40 | 40 (0% change) |
52 | 56 (9.6% increase) |
35 | 36 (2.9% increase) |
50 | 53 (6% increase) |
Result/real | 106.7% |
VI. Discussion and Future plan
Based on the results, analyses and future plans have been made. Although most people voted for a smartphone-based solution, it was harder to implement: smartphones need tiny components and complex programming. On the other hand, the gloves, the second option in the voting, were easier to build and easy to carry.
i. reading section of the prototype:
ii. writing section of the prototype shown in figure (15):
The average time per letter is 4.87 seconds, which varies depending on experience with the system. A normal sentence needs about 1 minute 10 seconds, which is considered an acceptable time and can improve with more practice.
iii. obstacle avoidance system:
The ultrasonic sensor worked well, but its wires interfered with its operation, so when wearing the gloves the ultrasonic sensor should be attached to the lower arm, or it can be connected to a mobile device on the chest or the shoes. Distances of 50 cm or less are detected accurately, so the blind-deaf user can feel the vibration when something is close.
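A minimal sketch of the obstacle-avoidance logic described above; the 50 cm threshold comes from the result just mentioned, while the distance-reading function and motor driver are hypothetical placeholders rather than the project's firmware.

```python
# Turn on all vibration motors when the ultrasonic sensor reports an obstacle
# closer than the 50 cm threshold that the prototype detects reliably.
# read_distance_cm() and set_all_motors() are hypothetical placeholders.

OBSTACLE_THRESHOLD_CM = 50


def read_distance_cm():
    """Placeholder for the ultrasonic sensor reading (echo time converted to cm)."""
    return 42.0  # example value


def set_all_motors(on):
    """Placeholder for driving every vibration motor at once."""
    print("all motors", "ON" if on else "OFF")


def check_obstacle():
    distance = read_distance_cm()
    set_all_motors(distance <= OBSTACLE_THRESHOLD_CM)
    return distance


check_obstacle()  # with the example reading (42 cm) the motors turn ON
```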
iv. obstacle detection system:
The results show that the model has a 50% accuracy, which is unacceptable: it can only detect basic objects. However, the connection to the application shown in figure (16) is successful and fast. To solve this problem in the future, another model (YOLOv8), shown in figure (17), was tested; it showed accuracies ranging from 70 to 90 percent. The remaining problem is inserting it into the application, so that model could be a subsequent solution and forms the core of the project's future plans.
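As a sketch of how such a detection model might be run, assuming the ultralytics YOLOv8 package is used; the weights file and test image names are placeholders, not the project's actual application code.

```python
# Run a YOLOv8 model on a single image and print the detected object names,
# roughly what the companion application needs to do before translating a
# detected object's name into braille vibrations on the gloves.
# "yolov8n.pt" and "test.jpg" are placeholder file names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # small pretrained model (assumed weights file)
results = model("test.jpg")     # run detection on one image

for box in results[0].boxes:
    class_id = int(box.cls[0])
    confidence = float(box.conf[0])
    print(f"{model.names[class_id]}: {confidence:.2f}")
```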
VII. Conclusion
The main objective of this project is to minimize the obstacles faced by individuals who are blind and deaf, aiming to enhance their communication abilities. One notable aspect of this project is its cost-effectiveness, making it a more affordable solution compared to other alternatives. The device's framework primarily consists of press buttons, vibration motors, and ultrasonic sensors. By utilizing this combination, the device enables deaf and blind individuals to communicate more effectively. The vibration motors play a crucial role in allowing users to read what is typed on a smartphone. The words typed on the keypad are then displayed on an LCD screen, providing visual feedback. To further enhance usability, potential obstacles can be detected and avoided using an ultrasonic module. The significance of this technology extends beyond its immediate benefits, as it has the potential to positively impact the lives of millions of blind and deaf individuals, empowering them to move more freely and communicate with greater ease.
IX. Acknowledgement
We would like to sincerely thank Allah for His mercies and guidance during our academic path. Additionally, we are incredibly appreciative of our parents' unwavering support, love, and encouragement. We would also like to thank Abdelrahman Abdel Aleem, our research instructor; Dr. Ahmed Abdel Aleem, our project mentor; and YJS for their tremendous advice and assistance. In addition, we want to thank Ahmed Alaa, who is an important part of this project but unfortunately could not join us for the research. Finally, we would like to express our gratitude to everyone who contributed in any manner to our research.
X. Reference
Abstract Current office buildings in temperate climate zones may suffer from inefficient energy utilization that does not match the workers' demands. This research aimed to identify the primary environmental need and to determine energy-related metrics that help sustain those environmental conditions. The study relied on both qualitative and quantitative methods drawn from other case studies due to the limitations we had. One quantitative method involved surveys, which helped us comprehend the environmental need that office occupants most wanted addressed: indoor air quality. Another quantitative method involved analyzing energy records, which served as a tool to define energy-efficiency metrics. In contrast, qualitative methods like interviews with experts were beneficial for understanding the specific problems with air quality systems, particularly regarding HVAC. To address the need for refined air quality, we identified performance metrics that correlate with energy-efficient operation, covering HVAC consumption, outdoor temperature, and occupancy, as well as control system strategies for air quality management: BMS, VAV, and VFD. Altogether, this research promoted energy efficiency aligned with occupants' needs in temperate-climate office buildings, focusing on air quality improvement through refined control systems and performance metrics. Further research can potentially be conducted in office buildings located in harsher climate zones. Moreover, due to research constraints, we were unable to conduct on-site visits to office buildings; therefore, our future actions will focus on conducting in-person interviews during site visits to gain deeper insights into environmental requirements, and we will also make efforts to obtain energy consumption records from the building administrations.
a) Keywords: environmental needs, indoor air quality, energy consumption, performance metrics, energy management
b) Abbreviations:
I. Introduction
Climate change is a globally recognized problem that has been known to exist for a long period of time, and extensive efforts have been and are being exerted to find effective solutions to it. One of the largest contributors to this problem is energy consumption: to cover the massive energy demands, the world relies on non-renewable energy resources, which produce large amounts of greenhouse gases like carbon dioxide, methane, and nitrous oxide. Therefore, constructing sustainable and energy-efficient buildings has become essential to meet the large energy demands of today's world. These buildings are known as green buildings; they are known for their sustainability and energy independence, as they use renewable resources to cover a large percentage of their energy needs or even their whole demand. The purpose of this study is to dive deeper into the evaluation of green building projects through what are known as energy efficiency benchmarks, specifically for office buildings located in temperate climate zones, as these types of buildings consume tremendous amounts of energy and are spread broadly over the globe. Among commercial buildings, office buildings are the most numerous and have the highest total energy consumption (about 14% of the energy consumed by all commercial buildings).
II. Literature review
This literature review aims to explore the extensive field of benchmarks and measures related to energy efficiency. The primary focus is on how these benchmarks and metrics are applied to evaluate the effectiveness of green building initiatives in office buildings located in temperate climates. By analyzing diverse scholarly sources, this review aims to provide a comprehensive understanding of viewpoints, approaches, and terminology related to energy efficiency assessment. Certain researchers have made significant contributions in this area by proposing blueprints for the construction of sustainable, energy-efficient structures.
III. Methods
The results reported in this research paper were obtained using both qualitative and quantitative methods for a stronger and more integrated approach to the required results, with a persuasive overall conclusion for the entire procedure; this research is therefore essentially mixed-method research.
i. Quantitative methods
The quantitative methods were conducted mainly using two techniques: surveys and energy records. The surveys were used to identify the environmental aspect most impactful for the occupants that they wished to be more eco-friendly and energy-efficient; this aspect also contributed to occupants' comfort inside the building. The type of surveys used in our study is called post-occupancy evaluation (POE) surveys, which obtain feedback on a building's performance, spanning approximately 300 participants. These surveys (for which we had no access to background data or any other methodological data) were conducted by previous scholars in two office buildings in Sydney, Australia, a city known for its temperate climate. Implementing surveys was a crucial step in our study because identifying the workers' environmental preferences and improving on them will result in green and energy-efficient institutions, in addition to increasing the workers' comfort level inside their working space, which will result in a more productive community. The energy records used in this research were compiled by scholars from Texas by observing the energy consumption of office buildings over a period of 24 hours; the observation records were analyzed using machine learning techniques and used to build graphs on three attributes: HVAC consumption, occupancy of the building, and outside temperature. These graphs shed light on some issues that can potentially be fixed for better energy performance in these buildings.
ii. Qualitative methods
The qualitative methods used in this research took the form of interviews with experts on the ecological consequences of inefficient energy consumption, particularly over-specification in cooling systems; these interviews clarified the causes of such issues. The information collected from the experts in these interviews ultimately contributes to the success of green buildings.
IV. Results
Scaling Method | Description | Mathematical Formula |
---|---|---|
Min-Max Scaler | Used to normalize data to the range [0, 1]: for each value in the feature, the minimum value is subtracted and the result is divided by the difference between the original maximum and original minimum | $\frac{x - \min}{\max - \min}$ |
Standard Scaler | Used to rescale the distribution of the data by subtracting the mean and then dividing by the standard deviation [43] | $\frac{x - \mu}{\sigma}$ |
Robust Scaler | Primarily used to remove the effect of outliers, as the centering and scaling of this scaler are based on percentiles [44] | $\frac{x - Q_2}{Q_3 - Q_1}$ |
Cluster | HVAC kWh (Min-Max) | Outdoor Temperature (°F) (Min-Max) | N_Users (Min-Max) | HVAC kWh (Standard) | Outdoor Temperature (°F) (Standard) | N_Users (Standard) | HVAC kWh (Robust) | Outdoor Temperature (°F) (Robust) | N_Users (Robust) |
---|---|---|---|---|---|---|---|---|---|
Cluster 0 | 2.92041 | 79.321325 | 0.79497 | 1.98926 | 79.047934 | 0.625069 | 13.87707 | 83.121461 | 15.52785 |
Cluster 1 | 7.96495 | 74.409836 | 20.4754 | 8.98954 | 76.07863 | 18.82016 | 1.774806 | 70.057293 | 0.747598 |
Cluster 2 | 2.35077 | 55.230648 | 0.8503 | 2.39227 | 54.342054 | 1.759859 | 5.714216 | 66.734021 | 17.44433 |
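To illustrate how energy records of this kind can be scaled and clustered, here is a minimal sketch using scikit-learn; the readings below are made-up illustrative values, not the dataset analyzed by the cited scholars.

```python
# Minimal sketch: scale hourly office-energy records with the three scalers
# from the table above and cluster them with k-means, then report per-cluster
# means in the original units. The readings are illustrative, not study data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler
from sklearn.cluster import KMeans

# Columns: HVAC consumption (kWh), outdoor temperature (°F), number of users
records = np.array([
    [2.9, 79.0, 1],
    [8.0, 76.0, 20],
    [2.4, 55.0, 1],
    [7.5, 74.0, 18],
    [3.1, 80.0, 2],
    [2.2, 54.0, 1],
])

for name, scaler in [("Min-Max", MinMaxScaler()),
                     ("Standard", StandardScaler()),
                     ("Robust", RobustScaler())]:
    scaled = scaler.fit_transform(records)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
    for k in range(3):
        print(name, "cluster", k, records[labels == k].mean(axis=0).round(2))
```

The choice of scaler changes which readings end up in the same cluster, which is why the table above reports the cluster contents separately for each scaling method.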
V. Discussion
The primary goal of this research paper was to find the essential environmental needs and determine the energy-related metrics that are most efficient for the environment; these goals were described in detail earlier in the abstract and introduction. The objectives of this research were fulfilled using both qualitative (experts' interviews) and quantitative (surveys and energy records) approaches. These data focused on the needs of the building occupants with respect to their environment and their environmental beliefs (POE surveys), the energy consumption of the building based on three attributes: HVAC consumption, occupancy, and the outdoor temperature (energy records), and how to fix problems related to the cooling system of the building (experts' interviews). After gathering and analyzing these data, it was found that the most important environmental factor to the workers is indoor air quality; the energy records also showed the lack of control systems within office buildings. These findings pointed to solutions that address the objectives of our study, such as improving indoor air quality using natural ventilation systems, using efficient control systems for HVAC consumption so that the building can respond to the heavy load during specific periods of the day, and solutions to the problems related to the cooling systems.

The quantitative data used in this paper (surveys and energy records) were analyzed using descriptive statistics, which is a way of describing a dataset statistically. The findings of this research will contribute significantly to the development of green buildings, as the data were collected from different sources and analyzed to come up with efficient solutions that will shorten the path for engineers to build a green office building that is fully green and fully self-dependent in its energy consumption. During the process of conducting this research, we were not able to access the experimental field, so we had to rely on datasets collected by other scholars; this obviously was a significant limitation to our work, as it might have slightly affected the accuracy of our results. It is also unlikely for future obstacles to face our study, as interest in green buildings is increasing rapidly and similar topics related to green buildings are currently being researched by many other scholars. For efforts willing to continue the work done in this study, we would recommend having a larger set of data for better precision in the results; in addition, having access to experimental fields and on-site locations would contribute greatly to the accuracy of the results.
VI. Conclusion
Throughout the work, the research played an integral part in identifying energy consumption metrics with two purposes: measuring and managing performance. It determined specific performance metrics, such as those linking HVAC consumption, outdoor temperature, and building occupancy, as well as effective control system strategies like VAV, VFD, and BMS. These were utilized to optimize energy utilization in office buildings, aligning with occupants' primary environmental needs. Throughout the study, energy records considered by other scholars were handy in defining those metrics. Although the initial anticipation was to formulate highly specific HVAC usage metrics, the study revealed that external factors play a vast role in shaping the ultimate metrics. That said, as we align with occupants' ecological preferences, it became essential to establish metrics for indoor air quality, considering that many occupants prioritize it as a primary concern; this step had not been anticipated prior to the onset of this research. All in all, the findings from this research succeeded in identifying the significant energy-related metrics that primarily address occupants' needs regarding indoor air quality in office buildings located in temperate climate zones (like Malaysia, Australia, and the southern United States). Further research can potentially investigate other environmental needs of occupants, for example, lighting that is both eco-friendly and energy-efficient. On top of that, there is potential for further research in more challenging climate conditions, such as extreme cold or high humidity environments.
VII. Acknowledgement
First, we would like to praise God for helping us and guiding us during this journey to yield this research paper. Secondly, we would like to give our huge thanks and gratitude to our biggest supporters for their invaluable help, as without their support this work would never have been possible. First, to Ziad Ahmed, our respectful and patient mentor; his feedback was the biggest contributor to our paper, and he was available all the time for our urgent inquiries. Second, to the Youth Science Journal management board; they were our number one supporter throughout this entire journey, helped us with our biggest challenges, and provided us with useful materials that clarified many questionable things in our research. Third, to the researchers who conducted the papers mentioned in the references list; these scholarly resources were our main source of information. Finally, to our family and friends for giving us emotional support and keeping us motivated during this process.
XI. References
Abstract
This project is dedicated to studying how China has handled its infringement of the Uighur nation's rights so that other countries do not provide any help to this nation. Moreover, it explores what pressure China puts on the Uighur nation and how international organizations and countries affect this issue. This topic significantly impacts the development of the Uighur nation; addressing it creates opportunities to identify problems and begin to solve them in the international arena. Questionnaires on this subject were completed by millions of people, who have shared their knowledge and experience.

This scientific inquiry sheds light on the manipulation of media, diplomacy, and international relations, unveiling tactics that other governments might employ in the future. Furthermore, this research empowers international bodies, NGOs, and civil society to develop more robust mechanisms for safeguarding human rights. Armed with an in-depth understanding of how a nation effectively diverted attention, stakeholders can formulate proactive strategies to uphold the rights of marginalized populations and hold responsible parties accountable.
I. Introduction
The article discusses how China's manipulation of media, diplomacy, and international relations affects the development of the Uighur nation. It sheds light on the methods used by the Chinese government to divert the attention of the world community from human rights violations in Xinjiang. The Uighur people have been subjected to forced labor.
i. The causes of the genocide problem in the world.
This study starts with a description of the Chinese and Uighur peoples, then moves on to an examination of the Uighur problem itself: how this problem arose and what its causes are. In order to frame the topic as it stands at the present moment, it describes the current state of the Chinese government's policy towards the Uighur people. The Chinese government is a very authoritarian government and seeks to subjugate the opposition in various ways. The Chinese government's methods currently apply to the Uighurs but could be applied to any groups demanding liberalization or democratization in the Chinese state.
ii. The importance of research.
The conclusion of this research work presents solutions to this problem, stating that addressing the Uighur genocide requires international cooperation and diplomatic efforts. By disseminating accurate information, human rights groups and international organizations can help the international community understand the seriousness of the situation and exert public pressure to take action. It is important to advocate transparency, dialogue, and compliance with international law in order to effectively resolve the ongoing crisis.
II. Habitation of the Uyghur Nation now
As of now, the Uyghur population continues to reside in the Xinjiang Uyghur Autonomous Region in northwest China, where the majority of Uyghurs are concentrated. However, it is important to note that the situation in Xinjiang has dramatically changed in recent years due to the Chinese government's policies aimed at assimilation, surveillance, and mass detention of Uyghurs.
i. Consequences of pressure on the Uyghur nation
The Chinese government's assimilation policies have targeted Uyghur culture, language, and religious practices. Uyghurs are pressured to conform to Han Chinese norms, eroding their distinct cultural identity. This suppression of cultural expression can have long-lasting effects on the Uyghur nation's cultural heritage and the sense of belonging among Uyghur individuals.
ii. China's governmental system
The Communist Party of China is the ruling political party in China. It holds ultimate power and authority in the country. The party's General Secretary is considered the most powerful position in the country. China's governmental system is highly centralized, with decision-making concentrated in the hands of top Communist Party leaders. Power is exercised through a hierarchical system, with guidance and directives flowing from the central government to provincial, municipal, and local levels. China's governmental system follows the principle of socialism with Chinese characteristics. This refers to a blend of socialist ideology and market-oriented economic reforms, combining central planning with elements of market competition and private ownership.
iii. Organizations fighting for Uyghur Nation's rights
Nowadays, China's good political relations with other countries leave the Uighur nation cut off from international help. China bears state responsibility for breaching every article of the 1948 Genocide Convention in its treatment of the Uighur people of Xinjiang province.
IV. Other countries' reaction to that issue
The U.N. Human Rights Council, individual countries, and international organizations have been putting pressure on China over Xinjiang and calling on Beijing to allow U.N. inspectors into the region to investigate.
i. International countries' policy
China has detained Uighurs at camps in the north-west region of Xinjiang, where allegations of torture, forced labour and sexual abuse have emerged. The sanctions were introduced as a coordinated effort by the European Union, UK, US and Canada. China responded with its own sanctions on European officials.
ii. Methods of exploring research
First, a comprehensive literature review was employed to analyze existing scholarly works, media reports, and policy documents on the topic; this provides a foundational understanding of key narratives and strategies. Secondly, a qualitative content analysis of international media coverage and diplomatic statements was conducted to discern patterns in framing; by identifying linguistic nuances, the research aims to uncover how China shapes the narrative surrounding the Uighur issue. Thirdly, interviews with experts in international relations and communication will offer insights into China's diplomatic and public relations strategies, and this qualitative data will be triangulated with the media analysis to enhance our understanding. Additionally, social media analysis and sentiment tracking will help gauge public engagement. Ethical considerations will guide our research, ensuring a balanced and unbiased approach. By triangulating information from various sources, the research aims to uncover the mechanisms employed by China to divert attention and to offer a nuanced understanding of this complex issue.
iii. Actions of China, which divert the attention of other countries
China's significant economic influence around the world, particularly through trade and investment, can often divert the attention of other countries. Many countries have economic ties with China and may be reluctant to criticize or confront China on certain issues in order to protect their economic interests.
iv. China's contribution to the Uighur nation
China has invested in infrastructure and economic development projects in Xinjiang, which can bring benefits to the Uighur population in terms of job opportunities, improved access to resources, and economic growth. However, these developments have also been criticized for displacing Uighur communities, marginalizing Uighur businesses, and benefiting primarily Han Chinese settlers rather than the Uighur population. It is crucial to approach the question of China's contribution to the Uighur nation with a critical lens and sensitivity to the ongoing human rights crisis. While there may be certain aspects that have brought benefits to some individuals or communities, they should not overshadow the severe violations being committed against the Uighur population and the urgent need for international attention, accountability, and support for the victims.
V. Conclusion
The current status quo between the Uyghur and Chinese nations is marked by significant tensions and human rights concerns. The Chinese government's policies in the Xinjiang region, where the majority of Uyghurs reside, have led to widespread human rights abuses and raised serious international concerns. Reports indicate that over a million Uyghurs and members of other minority groups have been detained. The international community should continue to exert diplomatic and economic pressure on China to address the Uyghur genocide. This can involve sanctions, trade restrictions, and diplomatic efforts to hold the Chinese government accountable for its actions. Independent international bodies, such as the United Nations or other reputable organizations, should conduct thorough and unbiased investigations into the human rights abuses in Xinjiang.
Abstract This research explores the impact of the superconductor LK-99 on the maintenance and operational costs of quantum computers. Quantum computers have shown great potential in revolutionizing computing by leveraging the principles of quantum mechanics to achieve exceptional computational power. However, the fragility of qubits - the fundamental units of quantum computing - poses significant challenges in terms of errors and decoherence. Efficiently managing quantum errors requires complex error correction techniques and the cooling of qubits to ultra-low temperatures. The concept of room-temperature superconductivity, exemplified by LK-99, offers a transformative solution by potentially reducing the cooling requirements of quantum hardware. This breakthrough could enhance the stability and viability of quantum devices for widespread utilization. This paper investigates the potential benefits of LK-99 in quantum computer operations, highlighting its role in mitigating thermal noise and improving qubit stability. The findings provide valuable insights into the practical implications of integrating superconductors in quantum computing systems, paving the way for more efficient and cost-effective utilization of these powerful machines.
I. Introduction
Supercomputers have brought about a big change by effectively addressing complex scientific challenges that were previously obstacles for researchers. Their remarkable capacity to execute complicated algorithms expediently, compressing the timeline of tasks from weeks to minutes compared to classical computers, and their ability to simulate atomic-level behaviors of molecules have played fundamental roles in scientific advancements. Even with all the potential they hold, supercomputers have their limitations when compared to the enormous potential of quantum computers, which are still in development but show great promise.

The potential of quantum computers becomes conspicuously evident through their exceptional accomplishments. Google's Sycamore, a quantum computer boasting 53 qubits, the foundational units of quantum computing, successfully tackled a formidable random circuit sampling problem within a mere 200 seconds; in stark contrast, conventional computers were estimated to require on the order of 10,000 years to achieve the same feat. Another noteworthy instance is the Fugaku quantum computer at the University of Tokyo, which adeptly emulated the intricate process of protein folding, a pivotal pursuit in drug development. Additionally, the University of Maryland employed quantum computers to replicate the behaviors of Ising model quantum magnets, thereby unveiling insights into quantum phenomena. These instances underscore the precision and intricacy of quantum simulations, underscoring the profound influence of quantum computing by yielding groundbreaking advancements.

These quantum milestones are made possible by leveraging the tenets of quantum mechanics, particularly "superposition" and "quantum entanglement." Superposition engenders data parallelism, where quantum bits (qubits) exist in multiple states simultaneously, empowering quantum computers to navigate an extensive array of possibilities within a single computation. This capability significantly accelerates specific problem-solving scenarios. Furthermore, the phenomenon of quantum entanglement permits interconnections among qubits, enabling the instantaneous influence of one qubit's state on another, even across significant distances. This property empowers quantum computers to execute intricate operations and simulations that surpass the capabilities of classical computers.

Quantum computers distinguish themselves from their supercomputer counterparts through various pivotal disparities. Foremost is the fundamental distinction in their data storage mechanisms: supercomputers utilize "classical bits," while quantum computers use "qubits." This distinctive attribute empowers quantum computers to simultaneously explore an array of outcomes, thus setting them apart in computational efficacy. Furthermore, quantum computers leverage specialized tools known as "quantum gates," a departure from the conventional "logical gates" in supercomputers. These quantum gates facilitate the manipulation of qubits, enabling the execution of multifaceted algorithms, which address intricate challenges like factorization of large numbers or unsorted database searches.

As mentioned, quantum computing holds the promise of revolutionizing computations. Yet, this potential faces significant hurdles stemming from the fragility of qubits. Unlike classical bits, qubits are vulnerable to "decoherence," where their quantum states lose coherence due to environmental interactions.
This leads to reduced fidelity and quantum computation errors, which amplify the risk of inaccuracies in quantum algorithms. Environmental factors further compound qubit decoherence: temperature fluctuations and electromagnetic radiation can destabilize the quantum states of qubits, accentuating the need to shield qubits from such interactions. Quantum errors encompass inaccuracies in quantum computations stemming from various sources, including qubit decoherence, gate imperfections, and readout inaccuracies. Addressing these errors necessitates intricate error correction techniques that leverage redundancy to enhance the stability of quantum computations. Efficiently managing quantum errors involves cooling qubits to ultra-low temperatures, nearing absolute zero, to mitigate thermal noise and thus reduce qubit decoherence. Cryogenic systems and innovative cooling methods have emerged as critical tools for maintaining qubit stability and fidelity. Notably, the concept of room-temperature superconductivity offers a transformative avenue in quantum computing: identifying materials that exhibit superconductivity at or near room temperature could substantially alleviate the cooling requirements of quantum hardware. This breakthrough could streamline quantum device operations, making them more viable for widespread utilization. This paper will explore the distinction between classical bits and qubits, the fundamental units of information in supercomputers and quantum computers. It will also cover the three primary types of quantum computers, gate-based, annealing-based, and superconducting quantum computers, alongside the cooling process, popular superconductors, and implementations of LK-99.
II. Quantum Bits
Within the world of computing, the conventional method of storing information relies on classical bits, fundamental units characterized by two states: 0 (off) or 1 (on). These binary representations are subsequently translated into the data processed within a computer system. However, this conventional approach carries inherent limitations. Notably, it necessitates longer processing times for intricate simulations and the resolution of complex equations. These time-intensive endeavors arise from the nature of classical bits, which can occupy only one state at a given moment. Nevertheless, this limitation does not imply a barrier to computational advancements.
III. Types of Quantum Computers
Gate-based quantum computers
Gate-based quantum computing, a cornerstone of quantum computational paradigms, relies on the orchestration of quantum gates as its foundational building blocks. These gates wield the power to manipulate the quantum states of qubits and pave the way for the execution of intricate quantum algorithms. Quantum gates are often represented by unitary matrices (U) and serve as the linchpin for quantum computation, playing a pivotal role in the processing of quantum information.
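As a small self-contained illustration of a quantum gate acting as a unitary matrix (a numerical sketch, not tied to any particular hardware or framework):

```python
# A single-qubit state is a 2-component complex vector; a gate is a 2x2
# unitary matrix. Applying the Hadamard gate to |0> yields an equal
# superposition of |0> and |1>.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                        # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2      # measurement probabilities

print("amplitudes   :", state)          # both approximately 0.707
print("probabilities:", probabilities)  # [0.5, 0.5]
print("H is unitary :", np.allclose(H.conj().T @ H, np.eye(2)))
```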
Annealing-based quantum computers
Annealing-based quantum computers, also known as quantum annealers, represent a distinct category of quantum computing devices meticulously engineered for the resolution of optimization problems. These cutting-edge devices find their forefront in the work of D-Wave Systems, a leading entity in quantum computing technology. The underlying operational principle of annealing-based quantum computers revolves around a process known as quantum annealing. This technique harnesses the inherent inclination of quantum systems to converge towards low-energy states, rendering it invaluable in the domain of optimization problems. Quantum annealing offers a novel approach compared to classical annealing, which explores energy landscapes iteratively; instead, quantum annealing taps into quantum tunneling and fluctuations, enabling the agile navigation of a more extensive solution space. In the realm of annealing-based quantum computers, problems are aptly translated into discrete energy levels. The ultimate objective is to discern the configuration characterized by the lowest energy state, signifying the optimal solution. Hence, annealing-based quantum computers commence by initializing qubits in a superposition of states, facilitating the simultaneous exploration of multiple potential solutions. This fundamental concept underpins the efficient traversal of solution spaces.
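To make the idea of "discrete energy levels" concrete, here is a purely classical brute-force sketch that enumerates the configurations of a tiny Ising-style optimization problem and picks the lowest-energy one; the couplings and fields are arbitrary illustrative values, and a quantum annealer is meant to search such landscapes without exhaustive enumeration.

```python
# Brute-force search for the lowest-energy spin configuration of a small
# Ising problem E(s) = sum_ij J_ij s_i s_j + sum_i h_i s_i, with s_i in {-1, +1}.
# The couplings J and fields h are arbitrary illustrative values.
from itertools import product

J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 1.0, (0, 3): -1.0}  # pairwise couplings
h = [0.1, -0.2, 0.0, 0.3]                                   # local fields
n = len(h)


def energy(spins):
    pair_term = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    field_term = sum(hi * si for hi, si in zip(h, spins))
    return pair_term + field_term


best = min(product([-1, 1], repeat=n), key=energy)
print("lowest-energy configuration:", best, "energy:", energy(best))
```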
Superconducting Quantum Computers
New territory within quantum computing is being explored through superconducting quantum computers, currently in the stages of development. This emerging approach aims to tackle the persistent challenge of decoherence, which affects other types of quantum computers due to interactions with their environment, including gamma rays and temperature fluctuations. A key issue arises as signals move through qubit circuits, where resistance generates heat. This thermal effect leads to quantum noise, disrupting the integrity of the quantum information in the qubits and causing inaccuracies in measurements. Superconducting quantum computers introduce an innovative solution by using a specific type of qubit called superconducting qubits. This approach integrates the unique properties of superconducting materials with the principles of qubits. The result is qubits with extremely low resistance and high conductivity, effectively minimizing heat generation, reducing quantum noise, and countering the negative impacts of decoherence.
IV. Cooling Process of QC
V. Superconductors
The superconducting quantum computer stands as a pinnacle in the realm of quantum computing, distinguished by its unparalleled precision in calculations. This extraordinary computational power hinges on the exploitation of quantum properties inherent to superconductive materials, particularly their zero electrical resistance ($0\,\Omega$) and perfect diamagnetism (the Meissner effect).
VI. LK-99
The rise of room-temperature superconductors holds immense promise across a spectrum of industries and scientific endeavors. These materials stand poised to usher in a new era of energy efficiency, particularly in electrical transmission and distribution, where they have the potential to drastically reduce energy losses within power systems. The implications extend to the realm of electronics, paving the way for more compact and efficient devices, spanning applications from consumer electronics to aerospace. In healthcare, these advanced superconductors could catalyze a revolution in medical technology, particularly in optimizing magnetic resonance imaging (MRI) devices, ultimately enhancing the accessibility and cost-effectiveness of healthcare services. Furthermore, the transportation sector stands to benefit significantly, with the promise of faster, more energy-efficient trains and vehicles, potentially leading to a reduced environmental footprint and improved mobility.

In the field of quantum computing, the advent of room-temperature superconductors carries profound implications. These materials have the potential to significantly enhance qubit stability, a pivotal factor in the efficacy of quantum computers. Among the many qubit technologies, superconducting qubits have shown great promise, and room-temperature superconductors could provide the stable environment necessary for qubits to maintain their quantum states over extended periods. Moreover, the elimination of conventional cryogenic cooling systems, made possible by room-temperature superconductivity, holds the promise of reducing the operational costs and complexities associated with quantum computing. This simplified cooling requirement may also streamline the design and scalability of quantum computing systems, potentially enabling the development of large-scale quantum processors capable of solving complex problems. Room-temperature superconductors could democratize quantum computing by alleviating infrastructure and operational barriers, thereby broadening access to this transformative technology across diverse research fields and industries [63-66].

The unique structural characteristics of LK-99, induced by $Cu$ doping at $Pb$ sites, play a pivotal role in conferring its superconducting properties. This stands in contrast to conventional stress-relieving mechanisms observed in $CuO$ and $Fe$-based systems. The mechanism of strain induction, whether from external forces or internal modifications, underscores the broader concept of strain-induced superconductivity. In the case of LK-99, the subtle contraction of the unit cell volume resulting from $Cu^{2+}$ substitution for $Pb^{2+}$ serves as an internal pressure proxy, hypothetically initiating superconductivity within lead apatite. The synthesis of lanarkite $(Pb_2SO_5)$ involves a reaction between $PbSO_4$ and $PbO$, yielding a white powder upon drying; phase purity is verified through powder X-ray diffraction (PXRD). On the other hand, a solid-state synthesis route entails a high-temperature heat treatment at 725 °C for 24 hours after mixing $PbSO_4$ and $PbO$. $Cu_3P$, another essential component, is synthesized via a reaction between $Cu$ and $P$ at 550 °C for 48 hours. Combining $Pb_2SO_5$ and $Cu_3P$ powders in a 1:1 stoichiometric ratio and subjecting them to a final heat treatment at 925 °C for 10 hours results in the formation of $CuPb_9(PO_4)_6O$, known as LK-99.
XRD analysis aligns the polycrystalline samples with JCPDS reference data. Comprehensive assessments encompass phase purity validation, magnetic levitation experiments, and isothermal magnetization (MH) measurements conducted at 280 K on an MPMS SQUID magnetometer, elucidating the magnetic properties of LK-99. This breakthrough beckons further investigation, particularly regarding its implications for the maintenance and operational dynamics of quantum computing. The search for room-temperature superconductivity has led to the exploration of several key mechanisms and strategies. Early breakthroughs involved the use of hydrogen sulfide $(H_2S)$ under extreme pressures exceeding 100 gigapascals (GPa), which showed traces of superconductivity at relatively high temperatures around 203 K (-70 °C). While the high-pressure requirement presents limitations, it underscores the possibility of specific materials exhibiting superconducting behavior under extreme conditions. Hydrogen-saturated compounds, containing hydrogen and other light elements like carbon, have also been investigated as potential candidates for high-temperature superconductivity due to their high hydrogen content and complex crystal structures. The theoretical prospect of metallic hydrogen, a state in which hydrogen transforms into a metal and potentially exhibits superconductivity at very high temperatures, including room temperature, remains a significant challenge to achieve and stabilize under laboratory conditions. Moreover, researchers are exploring derivatives of hydrogen sulfide that could exhibit superconductivity at more manageable pressures and potentially even higher temperatures. Other avenues of research include mechanical strain engineering, complex computational techniques, the use of organic materials such as molecular crystals, the exploration of multilayered structures, and systematic investigations into pressure-temperature phase diagrams [67-69].
Cryogenic Superconductors
Cryogenic superconductors, exemplified by materials such as yttrium barium copper oxide (YBCO), demand substantial ongoing maintenance expenditures. For instance, an average-sized facility utilizing cryogenic superconductors incurs annual costs of about $1 million for cryogenic cooling infrastructure and maintenance. Additionally, the energy consumption for cryogenic systems can be substantial, exceeding $200,000 annually. Moreover, regular maintenance requirements result in an estimated 5% downtime annually, impacting system reliability and productivity.
LK-99 (Room-Temperature Superconductor)
In contrast, LK-99, as a room-temperature superconductor, offers the advantage of eliminating cryogenic costs entirely. This translates to potential annual savings of approximately $1.2 million, encompassing cooling costs, infrastructure maintenance, and energy consumption. Furthermore, the simplified maintenance focus of LK-99, primarily on the superconducting material itself and its integration into the system, reduces annual maintenance costs to approximately $50,000.
Prospective Financial Advantages of Adopting LK-99
The adoption of LK-99 carries significant prospective financial advantages. By opting for LK-99 over traditional cryogenic superconductors, organizations can achieve estimated annual savings of around $1.15 million. This substantial cost reduction is primarily attributed to the elimination of cryogenic infrastructure and the associated maintenance costs. Additionally, LK-99's inherent energy efficiency results in annual savings of approximately $200,000, making it a cost-effective choice for industries reliant on superconducting technologies. Improved energy efficiency not only reduces operational expenses but also aligns with sustainability goals. Another crucial aspect is the enhanced reliability of LK-99 systems, with annual downtime reduced to a mere 1%. This improvement in uptime has a direct impact on system reliability and productivity, reducing disruptions and associated costs. Moreover, industries adopting LK-99, particularly in emerging fields like quantum computing, may gain a competitive edge by simplifying operations and reducing costs, potentially enabling them to capture a larger share of the market and further enhancing the financial advantages of LK-99 adoption.
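The cost comparison above can be tallied in a few lines; the sketch below (illustrative only, treating the per-year figures quoted in this section as assumptions) reproduces the roughly $1.15 million net annual saving:

```python
# Annual cost assumptions taken from the figures quoted above (USD/year).
cryogenic = {
    "cooling_infrastructure_and_maintenance": 1_000_000,
    "cryogenic_energy_consumption": 200_000,
}
lk99 = {
    "material_and_integration_maintenance": 50_000,
}

cryogenic_total = sum(cryogenic.values())   # ~ $1.2 million per year
lk99_total = sum(lk99.values())             # ~ $0.05 million per year
net_saving = cryogenic_total - lk99_total   # ~ $1.15 million per year

print(f"Cryogenic system: ${cryogenic_total:,}/year")
print(f"LK-99 system:     ${lk99_total:,}/year")
print(f"Net saving:       ${net_saving:,}/year")
```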
VII. Conclusion
Examining this research has unveiled the influence of LK-99 on the maintenance and operational expenditures associated with quantum computing. It tackles a pivotal challenge that impedes the utilization of quantum computers, namely the substantial expenses linked to the cooling procedure. This issue could be entirely mitigated by harnessing the room-temperature superconducting capabilities of LK-99, reducing quantum computing's operational costs by an estimated $1.15 million per year and enhancing the overall efficiency of the quantum computing process.
VIII. References
Abstract
Since gaining independence in 1991, Kazakhstan has actively pursued friendly relations with other nations, particularly with fellow Turkic-speaking states. This paper examines Kazakhstan's role in Turkic integration, its current
and future participation levels, and the potential benefits of Turkic unity for the country. The research employs a mixed-methods approach, including interviews with teachers and surveys among residents of Astana. Secondary
research explores existing literature on Turkic integration and Kazakhstan's involvement.
The findings suggest that Kazakhstan plays a significant role in Turkic integration, with a majority of respondents viewing the level of integration as high and Kazakhstan's role as substantial. The economy emerges as a crucial
sphere of cooperation, with many respondents favoring the development of economic ties. Additionally, there is notable interest in political-military cooperation, including the possibility of Kazakhstan leaving existing alliances
to form a Turkic military alliance.
Overall, Turkic integration is seen as a valuable endeavor for Kazakhstan, offering geopolitical independence and economic benefits. However, the study acknowledges limitations, such as the sample size and geographic scope, and
suggests future research to include a more diverse range of participants and explore Turkic integration from the perspectives of other Turkic nations.
The research aligns with existing studies and supports the notion that Kazakhstan's active participation in Turkic integration has strategic importance for the country. It underscores the significance of strengthening ties with
fellow Turkic-speaking states and pursuing a deeper level of cooperation, especially in the economic and political-military spheres.
I. Introduction
Kazakhstan, since gaining independence in 1991, has been actively developing friendly relations with other nations. One of the first countries with which our country established diplomatic relations was Turkey. In 2009, the Turkic Council was formed, which later transformed into the Cooperation Council of Turkic-Speaking States during its 8th summit. Our country has established close cooperation with Uzbekistan, and strong economic and political ties have been forged with other Turkic-speaking countries, such as Turkey, Uzbekistan, Azerbaijan, Kyrgyzstan, and Turkmenistan. This process of Turkic integration is often referred to as the "Turkic Council" formation, but its implications go beyond political cooperation alone. However, in general, does this integration benefit our country? Kazakhstan's foreign policy follows a multi-vector strategy, and Turkic integration deserves to be one of the core directions of this strategy. It is evident that Kazakhstan's foreign policy aims to enhance economic ties and military alliances through Turkic integration. This can be seen in the establishment of diplomatic relations with Uzbekistan and the news of joint military exercises with Turkey in July 2023. Each step in Kazakhstan's geopolitical neutrality reflects the country's future aspirations. The Kazakh government pays close attention to strengthening its ties with Turkic-speaking countries, as it understands the potential benefits and changes this integration could bring. This research analyzes Kazakhstan's position as the leading country in the Turkic integration process. This Turkic union serves the strategic development of Kazakhstan's foreign policy and will be beneficial at least in terms of geopolitics. Nowadays, Kazakhstan is actively participating in integrating the Turkic nations, and thus the purpose of the research is to compare Kazakhstan's current and future participation levels in Turkic integration and determine the potential benefits of Turkic unity for our country. Some of the questions covered throughout the research are the following:
II. Context
III. Methods
The secondary research method relied on academic articles related to the Turkic integration process and the role of Kazakhstan in it. The primary research method included a survey: one online survey was created on the Google Forms platform to conduct this research. Each participant was notified that the answers would be collected for this research purpose only. The questions and answers were provided in both Kazakh and English versions. This study does not pose any risk to the participants, as it does not collect any personal data. The survey was held among adults who live in Astana. The necessary materials for the research were as follows: Internet access for finding information and organizing the survey.
IV. Results
V. Discussion
The research methodology employed a combination of primary and secondary research methods, including surveys among students, as well as a review of existing literature on Turkic integration. These methods provided a comprehensive understanding of the perspectives of educators and the general public in Kazakhstan, but it is also worth mentioning that a considerable number of people gave the answer "I cannot answer". The more people who did not choose a particular answer, the less accurate the results would be. For example, in the questions about whether Kazakhstan should leave the existing Russia-led economic and military-political alliances in order to create alternative versions for the Turkic states, 43.9% and 45.5% of people could not choose either side of the answer: "Yes" or "No". The survey results indicate that a significant majority of respondents view Kazakhstan's relations with Turkic nations as strong and express a desire for further improvement. Economic cooperation, tourism, culture, and science are identified as key areas of collaboration. While there is some division on the issue of leaving existing alliances, a substantial portion of respondents are in favor of Kazakhstan pursuing a Turkic economic and military alliance. Many people chose tourism, economy, culture, and science as the top spheres in which Kazakhstan must cooperate with other Turkic states. Since only one person chose the option "I have not chosen Yes", the majority is in favor of developing relationships in many spheres. This suggests that the cooperation is beneficial for Kazakhstan and that Kazakhstan should continue to integrate. It is also important to note that S. Abzal
VI. Conclusion
Overall, all three questions of this study were answered. The research supports already existing studies and the statement about what should be done by the government of the Republic of Kazakhstan in the context of the integration of the Turkic states. Firstly, the survey and the secondary research have revealed that Kazakhstan actively participates in the integration process; so actively that the majority of people see the level of integration as high and Kazakhstan's role as that of a significant state. Kazakhstan is a participant in many Turkic unions, and its city of Turkistan even became the cultural capital of the Turkic world.
VII. Evaluation
The research was written to investigate the process of Turkic integration, especially the role of Kazakhstan in it, and to determine the benefits of Turkic integration for Kazakhstan. The research conducted in this study provides valuable insights into the dynamics of Turkic integration and its implications for Kazakhstan. The results gathered from the survey and the secondary research support the main idea of the research, that Kazakhstan actively participates in Turkic integration. Further research in the field of Turkic integration and Kazakhstan's role should encompass regional variations in attitudes, delve into policymakers' perspectives, conduct comparative analyses with other regional alliances, assess the impact of public awareness campaigns, and track evolving public opinion over time. Additionally, studies should investigate the influence of education on public perceptions, analyze the economic effects of integration, and include comparative case studies of other Turkic-speaking nations. Such research is vital for a comprehensive understanding of Kazakhstan's integration efforts and their implications, informing policy decisions and fostering informed public discourse. What is more, surveyed people should be provided with unbiased information related to the topic, so that they are less likely to choose the option "I cannot answer".
VIII. References
Abstract Although neutron stars are among the most fascinating objects in the universe, they each have their own special characteristics, such as their magnetic field, deformation rate, and rotation speed. The lifetime of neutron stars that end up as black holes has received insufficient focus in research and exploration. There is not enough data on this phenomenon, whereas studying it opens doors for space exploration and research in this field. The objective of this research is to help scientists, astronomers, and space agencies classify and detect these phenomena in space, improve their understanding of them, and make their space journeys more efficient and easier. This research will make a considerable contribution to the understanding of this phenomenon, in addition to collecting more data on it that could push space exploration to a higher state. A supervised machine learning (ML) model was proposed and implemented based on a decision tree to solve multiple equations derived from established theories, such as the equation of state of neutron-rich, dense matter and the Tolman-Oppenheimer-Volkoff limit, in order to classify the evolution of neutron stars in binary systems, i.e., whether or not they are about to collapse into black holes. Decisions are based on the mass of the neutron star, and since the gathered data were few and clearly separable, the accuracy of the model was 100%; with more sufficient data and the methodology presented here, the accuracy would differ.
I. Introduction
The purpose of studying neutron stars is to understand the behavioral mechanisms of matter and the nature and structure of both the universe and gravity.
II. Literature Review
i. Formation of Black Holes
According to NASA space discoveries, massive black holes could form when a supermassive star collapses. Relatively small black holes might form if two neutron stars merge, producing ripples called gravitational waves.
ii. The Fate of Neutron Stars
iii. The History of Neutron Stars
iv. Detecting Neutron Stars
The Gamma-ray Large Area Space Telescope (GLAST) will allow astronomers to detect even the most energetic and youngest pulsars in the Milky Way galaxy and study the acceleration of particles in space.
v. Discovering Black Holes
The existence of black holes was postulated in the 1960s. The complete assurance of their discovery came in the late 1990s in the Milky Way and other nearby galaxies. Since that time, extensive theoretical and observational research has been conducted to understand the astrophysics of black holes.
vi. Usage of Machine Learning in Astrophysics
Machine learning models in astrophysics are often applied to manually labeled data to automate tedious tasks on large survey datasets. However, one model trained on realistic simulations identifies galactic globular clusters (GCs) that harbor black holes within them. The goal was to help observers search for stellar-mass black holes and to better understand the dynamical history of black hole clusters by identifying and studying them individually. The model was applied to 18 GCs transparent enough to accommodate a black hole subsystem. The clusters designated by the ML classifier include M10, M22, and NGC 3201.
vii. Gravitational Waves Detectors
There are several gravitational wave detectors, including the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the USA and Virgo and GEO in Europe. Although these devices' success is not fully guaranteed, soon the global network of gravitational wave detectors will be sophisticated enough to record many signals of astronomical origin, large enough to allow the analysis of waveforms that can reveal the structure of the sources, and sufficiently extensive and redundant to locate gravitational-wave sources in the sky through triangulation. With this functional network, it should be possible to verify the fundamental physics of gravitational waves as predicted by general relativity. Advanced LIGO, in particular, is expected to enable key detections and astrophysics in most categories of gravitational waves.
viii. The Usage of Interferometers
ix. Latest Updates
The third round of LIGO observations (O3) revealed several candidates for a neutron star-black hole merger (NSBH).
x. Measurement Methods of Neutron Stars' masses
Radio observations of rotating pulsars have been shown to provide the most sensitive and precise measurements of neutron star mass.
III. Methods
The machine learning model was based on the decision tree algorithm, which has the function of solving a sequence of equations utilizing inputs from the dataset, such as neutron star masses, Schwarzschild radius, actual radius, density, gravitational waves, and pressure. Multiple datasets would be gathered and combined to match the needed parameters for each equation. The model was developed in such a way that it could first compute each star's Schwarzschild radius and compare it to the actual radius to determine which stars were most likely to collapse into a black hole. This was regarded as the first selection method to reduce the number of neutron stars believed to be on the verge of collapsing into a black hole. This would be achieved by providing the model with: $$ Star = \begin{cases} \text{Black hole}, & \text{if } R_{SC} \geq R_B \\ \text{Neutron star}, & \text{if } R_{SC} < R_B \end{cases} $$ where $R_{SC}$ is the Schwarzschild radius and $R_B$ is the actual radius. The condition states that the star is a black hole if its Schwarzschild radius is greater than or equal to its actual radius, and a neutron star if its Schwarzschild radius is less than its actual radius. The second selection is to measure the critical mass, at which the neutron star would collapse into a black hole if any further mass were added, as defined by the equation of state (EoS) and the Tolman-Oppenheimer-Volkoff limit.
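A minimal sketch of this first selection rule, assuming the standard Schwarzschild radius formula $R_{SC} = 2GM/c^2$ and illustrative input values (not the study's actual dataset), is shown below:

```python
# Minimal sketch of the first selection rule described above: compare each
# star's Schwarzschild radius with its measured radius. Values are illustrative.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """R_sc = 2GM / c^2, in metres."""
    return 2 * G * mass_kg / C**2

def classify(mass_solar: float, radius_km: float) -> str:
    r_sc = schwarzschild_radius(mass_solar * M_SUN)
    r_b = radius_km * 1e3
    return "Black hole" if r_sc >= r_b else "Neutron star"

# A typical 1.4 solar-mass, 12 km neutron star stays a neutron star;
# squeezing the same mass inside ~4.1 km would put it at the threshold.
print(classify(1.4, 12.0))   # Neutron star
print(classify(1.4, 4.0))    # Black hole
```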
IV. Results
The model was trained so that it could predict the 'Target' column of the dataset, splitting the data into an 80%:20% train-test ratio. This resulted in 100% accuracy, precision, recall, and F1-score values. This was later interpreted as a result of the small number of samples in the dataset, the well-cleaned data being well-separated and easily distinguishable by the features used in the model, and, lastly, the model overfitting the training data by memorizing the samples instead of learning general patterns. A hierarchical cluster graph was produced by visualizing the data of the two columns 'PSR name' and '$Mp\:(M_{\odot})$'. Another scatter plot was also graphed; it visualizes the data of the two columns 'Target' and '$Mp\:(M_{\odot})$'.
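For readers who wish to reproduce a similar pipeline, the sketch below is our assumption of how the described 80%:20% decision-tree experiment could be coded with scikit-learn; the file name and column names are hypothetical stand-ins for those mentioned above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical dataset mirroring the columns mentioned in the text.
df = pd.read_csv("neutron_stars.csv")           # assumed file name
X = df[["Mp (M_sun)"]]                           # feature: pulsar mass
y = df["Target"]                                 # label: assumed binary 0/1 column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)        # the 80%:20% split described above

model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("F1-score :", f1_score(y_test, pred))
```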
V. Discussion
i. Contribution to Space discovery
Our proposed machine-learning model facilitates the observation and understanding of neutron stars' lifetimes. It offers the opportunity to dive deeper into the Big Bang theory and the formation of the universe; additionally, it complements the science of nuclear physics and the nature of matter. Within a few seconds, data collection, analysis, and decision-making can be accomplished, whereas solving multiple equations manually would require days.
ii. Scientific theories and equations
1. Einstein's theory of general relativity: The maximum mass of a neutron star's equilibrium configuration, as constrained by Einstein's theory of general relativity, Le Chatelier's principle, and the principle of causality, would not exceed 3.2 $M_{\odot}$ (solar masses). When the equation of state of matter is unknown within a small range of densities, the extremal principle presented here also applies.
iii. Achievements
The ML model easily classified neutron stars according to whether or not they are close to collapsing into a black hole, without the need to solve complicated equations, achieving fairly high accuracy given the available datasets. It was also proposed that devices such as LIGO and Virgo be merged with such models under the control of an IoT platform, so that they could exchange real-time data and offer high processing and analysis capabilities to uncover the wonders of astrophysics and the possibilities of artificial intelligence.
iv. Limitations and roadblocks
Linking our model to the LIGO and Virgo devices so that it is continuously provided with real-time data was not accomplished, due to time constraints and an incomplete grasp of all aspects of the subject, since the main goal was to ensure the validity of our experiment. The lack of sufficient datasets matching the equations' inputs also limited the reliability of the accuracy estimate and prevented implementing a whole prototype performing the functions defined; as a result, a small, limited prototype was implemented to test both its accuracy and its capability to perform a similar idea.
v. Future Plans
One of the recommended additions is linking the real-time data-provider devices to the model, both to increase accuracy by training the model on more varied data and to make it ready for space agencies to use in their research. Another is developing the model to predict when a neutron star will convert into a black hole as a function of time, together with the probability of that conversion. The main and most important future plan is to construct a more capable and successful model by implementing the methodology recommended in this research, given that the available data were not sufficient.
VI. Conclusion
The machine learning model proposed here was a supervised one, implemented with a decision tree, to solve several equations in a practical, much easier way that can help explain and further explore this phenomenon. The equations used are based on significant theories such as Einstein's general theory of relativity, the Tolman-Oppenheimer-Volkoff limit, the equation of state for cold, neutron-rich matter, the Schwarzschild radius, and gravitational waves. Machine learning is a remarkable method in astrophysics, as it produces much more accurate results than solving equations theoretically, and it can be linked to tools in space. The decisions of the model are intended to be based on real-time analytics from the Laser Interferometer Gravitational-Wave Observatory (LIGO) and related devices. The parameters used to classify this phenomenon mainly include mass, radius, density, and pressure. As some hardships were faced, the methodology was not fully refined and the measured accuracy was 100%; it is expected that the accuracy will differ when a clearer methodology is implemented. However, this method will see further improvements later on.
VII. Acknowledgements
First and foremost, we would like to express our sincere gratitude to Mostafa Mostafa, our mentor, for keeping an eye on us, leading us in the right direction, and devoting a significant amount of his time and effort. We also want to thank the Youth Science Journal for providing young people with the chance to be mentored in their research process by offering qualified research resources. We also want to thank everyone who assisted us academically in the domains of computer science and astrophysics, particularly Abdelrahman Bayoumy, who supported the scientific base upon which our research is built and provided the extra space catalog that allowed us to collect enough data on stars for our research.
VIII. References
Abstract Mathematics has always been the mother of sciences. The main reasons behind this are the broadness of mathematics and its compelling ability to translate theory into laws and algorithms to help us understand the universe better. The discovery of imaginary numbers was a critical moment in the history of mathematics, extending its horizon by solving undefinable polynomials with such a revolutionary idea. This paper aims to clear the common misconception about the existence of a finite number of numerical systems, explain their applications, and extend basic algebraic properties to conclude their origin. The focus of this paper is on the abstract mathematical approach to higher-dimensional complex systems, or hyper-complex number systems, of Quaternions and Octonions, discussing the historical background of these systems, the related fundamental algebraic concepts, their construction, properties, operations, and finally their real-life applications. Hyper-complex number systems are not only beneficial in computer science and theoretical physics but also groundbreaking within the fields of mathematics. Accordingly, this paper summarizes the findings throughout the history of hyper-complex numbers and demonstrates their ability to be applied in physics, quantum mechanics, computer graphics, and more.
I. Introduction
Hypercomplex numbers, one of the most significant contributions to the field of mathematics, are a generalization of complex numbers and extensions to the widely known two-dimensional complex systems.
II. Groundwork: Algebraic Concepts
i. Elementary Definitions:
To set off the journey of the hyper-complex numbers, it is essential to construct some elementary definitions. According to elementary algebra, the real numbers $\mathbb{R}$ are the set of all real values, represented as a one-dimensional line. The complex numbers $\mathbb{C}$ were formulated based on $i$, in simple terms the imaginary unit [9, 10, 11]. $$ i = \sqrt{-1} $$ The complex numbers are two-dimensional numbers and are of the form $$ z = a + bi $$ where $a, b \in \mathbb{R}$. Each complex number consists of a real part "a" and an imaginary part "bi" [9, 10, 11].
ii. Abstract Definitions:
After dealing with some elementary high school concepts, it is time to introduce the required abstract concepts to start our journey. While dealing with the hyper-complex numbers, vector spaces will be finite-dimensional modules over $\mathbb{R}$ [7, 13, 14, 15, 16]. A vector space is a set $V$ whose elements are called "vectors"; generalizing the concept, vector spaces are "commutative groups" under addition. Nevertheless, vector spaces are more than commutative groups: vectors can also be scaled [13, 14, 15, 16]. $$ \vec{V} = (v_1, v_2, v_3, ..., v_n) \; \& \; c \in \mathbb{R} $$ $$ c \cdot \vec{V} = (c \cdot v_1, c \cdot v_2, c \cdot v_3, ..., c \cdot v_n) $$ Here "c" is called a scalar, and scalars are taken from a field $F$. Thus, if $v \in V$ is a vector and $f \in F$ is a scalar, then $f \cdot v \in V$ (a "scaled vector") [14, 15, 16]. An algebra $A$ will be a vector space equipped with a bilinear map (a function combining elements of two vector spaces to yield an element of a third vector space), $m: A \times A \rightarrow A$; this operation is called "multiplication" and is abbreviated as $m$ [7, 13, 16, 17]. There is an element $1 \in A$ such that $m(1,a) = m(a, 1) = a$. The operation called multiplication can be abbreviated as $m(a,b) = ab$.
Theorem 1: The real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, and the octonions $\mathbb{O}$ are the only normed division algebras. Moreover, the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, and the octonions $\mathbb{O}$ are the only alternative division algebras. Additionally, all division algebras have dimension 1, 2, 4, or 8.
The previous theorem is likely a combination of three theorems meant to relate and generalize the properties of the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, and the octonions $\mathbb{O}$. The fact that $\mathbb{R}, \mathbb{C}, \mathbb{H}$, and $\mathbb{O}$ are the only normed division algebras was discovered by Hurwitz in 1898.
III. Historical Exploration Through Higher-dimensional Complex Numbers
The ancient Greeks claimed to be the first "true" mathematicians to think of numbers as quantities for measurement, not as something abstract. Accordingly, mathematics back then was best described as "The Science of Quantities": lengths, areas, volumes, etc.
i. The History of Complex Numbers
A cubic equation associated with a problem in the Arithmetica of Diophantus (AD 200-AD 284) was as follows: $$ x^3 + x = 4x^2 + 4 $$ It is not known how the solution was determined to be 4, but it is expected that Diophantus simplified the equation to the form $$x(x^2 + 1) = 4(x^2 + 1)$$ The value $x = 4$ satisfies this equation, but the solutions to similar special cubics remained an open question. Although Fra Luca Pacioli (1447-1517) stated in his Summa de Arithmetica, Geometria, Proportioni, et Proportionalita that there is no solution for such cubics, several mathematicians, especially Italian scholars, nevertheless insisted on making attempts to find a solution.
ii. The History of Quaternions
The leading character of this section, William Rowan Hamilton (1805-1865), was able to construct the complex numbers from the real numbers, complementing the work of fellow mathematicians, namely Augustus De Morgan (1806-1871) and George Peacock (1791-1858), who aimed to justify the use of negative and complex numbers. Hamilton studied the operations of complex numbers in the two-dimensional plane and the geometrical interpretations of these operations. As a physicist, he knew how often physics involves problems in three-dimensional space. He suggested that it must be possible to develop a system of such operations in three dimensions and even in n dimensions.
iii. The History of Octonions
"If with your alchemy you can make three pounds of gold, why should you stop there?" asked John T. Graves in a letter in which he was replying to Hamilton, who happened to be his dear friend from college, congratulating him on the birth of his brilliant new idea of quadruplets. On December 26th of the same year, Graves wrote to his friend about an eight-dimensional norm division algebra, which he named "Octaves." Hamilton did not publish his friend's work at the time. Consequently, young British mathematician Arthur Cayley (1821-1895), who showed his interest in Hamilton's theory of quaternions since the announcement of their existence, published a paper that included the same idea of Grave's octonions in March 1845, and they became known as "The Cayley Numbers" [7, 39, 40, 41].IV. Constructing The Hyper — Complex Numbers
i. Quaternions:
Since the complex numbers were constructed as $z = a + bi$, in the form of a "dual or double" system, we are likely to consider the form $$ z = a + bi + cj $$ where $a, b, c \in \mathbb{R}$ and $i$ and $j$ are certain symbols.
ii. Octonions
The octonions were discovered by the Irish mathematician John T. Graves, a friend of William Rowan Hamilton, in the year 1843 in order to generalize the study of quaternions and extend its ideas. The multiplication table of the octonion units is as follows:
* | $e_0$ | $e_1$ | $e_2$ | $e_3$ | $e_4$ | $e_5$ | $e_6$ | $e_7$ |
--- | --- | --- | --- | --- | --- | --- | --- | --- |
$e_0$ | $e_0$ | $e_1$ | $e_2$ | $e_3$ | $e_4$ | $e_5$ | $e_6$ | $e_7$ |
$e_1$ | $e_1$ | $-1$ | $e_4$ | $e_7$ | $-e_2$ | $e_6$ | $-e_5$ | $-e_3$ |
$e_2$ | $e_2$ | $-e_4$ | $-1$ | $e_5$ | $e_1$ | $-e_3$ | $e_7$ | $-e_6$ |
$e_3$ | $e_3$ | $-e_7$ | $-e_5$ | $-1$ | $e_6$ | $e_2$ | $-e_4$ | $e_1$ |
$e_4$ | $e_4$ | $e_2$ | $-e_1$ | $-e_6$ | $-1$ | $e_7$ | $e_3$ | $-e_5$ |
$e_5$ | $e_5$ | $-e_6$ | $e_3$ | $-e_2$ | $-e_7$ | $-1$ | $e_1$ | $e_4$ |
$e_6$ | $e_6$ | $e_5$ | $-e_7$ | $e_4$ | $-e_3$ | $-e_1$ | $-1$ | $e_2$ |
$e_7$ | $e_7$ | $e_3$ | $e_6$ | $-e_1$ | $e_5$ | $-e_4$ | $-e_2$ | $-1$ |
V. Algebraic Operations, Multiplication Diagrams, and Mathematical Definitions
i. Quaternions
ii. Octonions
The addition operation for two octonions is identical to that of the complex numbers and the quaternions [7, 44, 46]. Thus, let $x, x' \in \mathbb{O}$ with $x = a + be_1 + ce_2 + de_3 + ee_4 + fe_5 + ge_6 + he_7$ and $x' = a' + b'e_1 + c'e_2 + d'e_3 + e'e_4 + f'e_5 + g'e_6 + h'e_7$; then $$ \displaylines{ x + x' = (a + a') + (b + b')e_1 + (c + c')e_2 + (d + d')e_3 + \\ (e + e')e_4 + (f + f')e_5 + (g + g')e_6 + (h + h')e_7 } $$
Note: Octonion multiplication is a non-commutative operation. Moreover, it is also a non-associative operation [7, 18, 44, 46]. These properties can be verified through the following example.
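As a worked illustration (ours, read directly off the multiplication table above):
$$ e_1 e_2 = e_4, \qquad e_2 e_1 = -e_4 \quad \Rightarrow \quad e_1 e_2 \neq e_2 e_1 $$
$$ (e_1 e_2) e_3 = e_4 e_3 = -e_6, \qquad e_1 (e_2 e_3) = e_1 e_5 = e_6 \quad \Rightarrow \quad (e_1 e_2) e_3 \neq e_1 (e_2 e_3) $$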
VI. Cayley — Dickson Construction
The Cayley-Dickson construction is an algebraic construction that relates the normed division algebras $\mathbb{R}, \mathbb{C}, \mathbb{H},$ and $\mathbb{O}$: each algebra in the sequence is obtained from pairs of elements of the previous one.
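The doubling step can be seen in action with a short program; the sketch below (our minimal illustration, using one common sign convention $(a,b)(c,d) = (ac - d^{*}b,\; da + bc^{*})$, not code from the paper) builds octonion units as nested pairs of real numbers and reproduces the non-commutativity and non-associativity discussed above:

```python
from numbers import Number

def neg(x):
    return -x if isinstance(x, Number) else (neg(x[0]), neg(x[1]))

def conj(x):
    # Conjugation: identity on reals; (a, b)* = (a*, -b) on pairs.
    return x if isinstance(x, Number) else (conj(x[0]), neg(x[1]))

def add(x, y):
    return x + y if isinstance(x, Number) else (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    # Cayley-Dickson doubling: (a, b)(c, d) = (ac - d*b, da + bc*).
    if isinstance(x, Number):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def pack(vals):
    # Turn a flat list of 2^n reals into nested pairs: R -> C -> H -> O.
    while len(vals) > 1:
        vals = [(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
    return vals[0]

def unit(k, dim=8):
    coeffs = [0.0] * dim
    coeffs[k] = 1.0
    return pack(coeffs)

e1, e2, e4 = unit(1), unit(2), unit(4)
print(mul(e1, e2) == neg(mul(e2, e1)))                 # True: imaginary units anti-commute
print(mul(mul(e1, e2), e4) == mul(e1, mul(e2, e4)))    # False: octonions are not associative
```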
VII. QQM (Quaternion Quantum Mechanics)
Quantum mechanics is a foundational theory in modern physics that aims to describe the physical phenomena and properties of nature on an atomic, i.e., quantum, scale. Over the years, many scientists have tried to find the correct interpretation of quantum mechanics, as it might guide us to the ability to fully describe the behavior of our universe. Quaternion quantum mechanics (QQM) represents a significant contribution that might answer the central question of quantum mechanics interpretation. Quaternion quantum mechanics was proposed for the first time in the year 1936 by Birkhoff and J. von Neumann.
Note: $DD\sigma = -\Delta \sigma$; thus equation (7.9) links quaternion quantum mechanics to reality in $\mathbb{R}^3$.
After stating the required fundamentals for working with the quaternions, we can start to link the quaternions with reality. Deformation fields represent the vector-field description of an object when a force is applied to it; they are either compression (irrotational) or twist (rotational) fields. The compression field is denoted by $\sigma_0 = div \: u$ and the twist field by $\hat{\phi} = rot \: u$. Helmholtz made use of quaternions by proposing the Helmholtz decomposition; furthermore, he proved that any deformation field $u$ can be decomposed into a compression field $u_0$ and a twist field $u_{\phi}$. The notation used below is:
$e$ = energy per unit mass in the deformation field,
$\sigma = \sigma_0 + \hat{\sigma}$, $\quad \sigma^* = \sigma_0 - \hat{\sigma}$,
$\hat{u} = \frac{\partial u}{\partial t}$.
Stationary wave ≡ particle $m$ in $\Omega$,
$E_m (\Omega)$ = total energy in the deformed solid,
$\tilde{V} (x)$ = external field.
By substituting $\psi = \sqrt{\frac{\rho_p}{2m}}\,\sigma$ into equation (7.17) for the total energy, we get $$ \displaylines{E_m (\Omega) = mc^2 \int_{\Omega} \biggl( \frac{m_P}{m} \frac{\rho_p}{2m} \Bigl(\frac{\hat{u}}{c} \cdot \frac{\hat{u}^*}{c}\Bigr) + \psi \cdot \psi^* \\ + \frac{2m}{m_P c^2} V(x)\, \psi \cdot \psi^* \biggr) dx } $$ Let us use the Cauchy-Riemann operator $D$ such that $$ \underbrace{\frac{\hat{u}}{c}}_{\text{normalized velocity}} = \underbrace{-l_p D \sigma}_{\text{normalized gradient of mechanical potential}} $$
$$ \displaylines{E_m (\Omega) = mc^2 \int_{\Omega} \biggl( \frac{m_P}{m} \frac{\rho_p}{2m} (D\sigma \cdot D\sigma^*) + \psi \cdot \psi^* \\ + \frac{2m}{m_P c^2} V(x)\, \psi \cdot \psi^* \biggr) dx } $$ The expression is then minimized by applying the Du Bois-Reymond lemma.
VIII. Three-Dimensional Rotation
One of the main applications of quaternions is three-dimensional rotation, which describes the attitude of a rigid body. Before using quaternions to represent three-dimensional rotation, we will briefly explore other approaches.
i. Euler Angles
Euler angles are a common method of describing orientation as a sequence of three rotations about three mutually perpendicular axes. To do so, a widely used method is the "heading-pitch-bank" system, which performs the rotation according to the following steps:
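The enumerated steps themselves are not reproduced here; as a stand-in illustration, the sketch below (our example, assuming a heading-pitch-bank order of rotations about the z, y, and x axes) composes the three elemental rotations into a single rotation matrix:

```python
import numpy as np

# Elemental rotations about three mutually perpendicular axes.
def rot_z(a):  # heading (yaw)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):  # bank (roll)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

heading, pitch, bank = np.radians([30.0, 20.0, 10.0])
# Apply bank first, then pitch, then heading (one common convention).
R = rot_z(heading) @ rot_y(pitch) @ rot_x(bank)

v = np.array([1.0, 0.0, 0.0])
print(R @ v)                              # the rotated vector
print(np.allclose(R.T @ R, np.eye(3)))    # rotation matrices are orthogonal: True
```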
ii. The Axis-Angle Representation
Euler's rotation theorem states that any three-dimensional rotation can be accomplished via a single rotation about one axis instead of three.
iii. Quaternion 3-D Rotation
As previously mentioned, the set of quaternions defines elements in $\mathbb{R}^4$, but an alternative representation of quaternions defines them by two parts: a scalar (real) part and a vector part in $\mathbb{R}^3$. In this way we can represent a quaternion $q$ as $$ q = q_0 + \vec{q} = q_0 + iq_1 + jq_2 + kq_3 $$ where $q_0$ is the scalar part and $\vec{q}$ is the 3-D vector part. A quaternion whose $q_0$ value is zero is called a pure quaternion. Thus, the product of a vector and a quaternion is the same as the quaternion product of a quaternion and a pure quaternion.
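To make this concrete, the sketch below (a standard construction we add for illustration, not code from the paper) rotates a vector by embedding it as a pure quaternion $p = 0 + \vec{v}$ and conjugating it with the unit quaternion $q = \cos(\theta/2) + \sin(\theta/2)\,\hat{n}$, i.e., $\vec{v}\,' = q\,p\,q^{*}$:

```python
import numpy as np

def q_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    """Rotate 3-D vector v by `angle` radians about the unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_conj = q * np.array([1, -1, -1, -1])
    p = np.concatenate(([0.0], v))              # embed v as a pure quaternion
    return q_mul(q_mul(q, p), q_conj)[1:]       # back to the vector part

# Rotating the x-axis by 90 degrees about the z-axis gives the y-axis.
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))
# -> approximately [0, 1, 0]
```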
IX. Conclusion
After the preceding investigation, we can conclude that the study of higher-dimensional complex numbers is a vital field of mathematics, specifically abstract algebra, engaging in various applications and areas of study. There are four known normed division algebras: the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, and the octonions $\mathbb{O}$, with dimensions 1, 2, 4, and 8, respectively. The discovery of complex numbers went through hundreds of years between acceptance and disapproval, from unveiling the existence of the square root of $-1$ to Gauss's construction of the two-dimensional complex plane. In 1843, Hamilton crowned his intensive work on complex numbers and their generalization with his discovery of the quaternions: associative, non-commutative (under multiplication) four-dimensional algebras with the imaginary units $i, j$, and $k$, governed by Hamilton's rules of multiplication $i^2 = j^2 = k^2 = ijk = -1$. In the same year, John T. Graves generalized the study of Hamilton by extending his quaternions to eight dimensions, constructing the octonions: non-associative, non-commutative (under multiplication) eight-dimensional algebras of the form $\mathbb{O} = \{ a_0 + \sum^7_{i=1} a_i e_i : a_0, a_1, ..., a_7 \in \mathbb{R} \}$, where each $e_i$ satisfies $e_i^2 = -1$. The Cayley-Dickson construction is a method developed by the mathematicians Arthur Cayley and Leonard Dickson, used to obtain new algebras from old algebras by defining the new algebra as a product of an algebra with itself, together with conjugation. Consequently, this construction explains why the octonions are larger than the quaternions and why the quaternions fit into the set of octonions, and likewise for the complex and real numbers. Additionally, it tells us why $\mathbb{H}$ is non-commutative under multiplication and why $\mathbb{O}$ is non-associative. Working with imaginary numbers in such ways may seem ambiguous and counterintuitive, yet hyper-complex numbers are crucial to quantum mechanics, since they might be the key to finding the correct interpretation of quantum mechanics. Furthermore, quaternion rotation forms the foundation of kinematic modeling in robots, and octonions are essential in other branches of abstract algebra. Thus, this paper is an insight into the world of imaginary numbers, with a fascinating, demonstrated ability to be applied physically in the real world.
X. References
Abstract We reviewed the relationship between clusters of galaxies and the ΛCDM (Lambda Cold Dark Matter) model of the universe. Since the formation and the characteristics of galaxy clusters support the theory behind the ΛCDM model, we discuss the formation process of galaxy clusters, their morphological characteristics, different observational techniques, and the parameters of the universe. We also mention methods from other cosmological probes that agree with galaxy cluster observations, hence validating ΛCDM as the standard model of the universe.
I. Introduction
The ΛCDM model serves as the prevailing cosmological framework that describes the evolution and structure of the universe, as it has developed over the decades, providing a fundamental understanding of our cosmos. The historical development and theoretical foundations of the ΛCDM model can be traced back to the early 20th century, with Einstein's general relativity solutions as the model's theoretical bedrock. Over time, advancements in observational astronomy, along with theoretical insights, have led to the refinement and formulation of the model. Dark matter, a non-luminous and elusive form of matter, is a key assumption of the ΛCDM model; it dominates gravitational interactions on cosmic scales, providing the gravitational scaffolding for the formation of galaxies and large-scale structures. Furthermore, dark energy, an enigmatic form of energy that permeates the universe, is incorporated into the model to account for the observed accelerated expansion of the universe, whose present rate is denoted by the Hubble constant $H_0$. Cosmological parameters characterize the properties of the universe, providing intrinsic quantities that encode information about its fundamental attributes. Determining the cosmological parameters of the universe, such as the total density parameter and the dark energy parameter, decides the composition of matter in the universe and determines the dynamics within the universe. Determining these cosmological parameters applies to many theories of gravity and particle physics. However, to constrain these parameters we use galaxy clusters, large-scale cosmic structures bound by gravity, which serve as laboratories for refining the model of the universe. Their abundance, spatial distribution, and mass measurements, obtained through optical and X-ray surveys, are used to test and refine theoretical predictions, providing critical constraints that allow us to distinguish between different cosmological scenarios. We start by examining the formation and evolution of galaxy clusters. Furthermore, we discuss observational methods that are used to infer data from clusters of galaxies, providing examples of some surveys and satellites. Moving on, we describe the morphological characteristics of galaxy clusters, such as their shapes and distribution. Finally, we define the cosmological parameters of the universe and their implications, mentioning examples from other studies and validating them through comparison with other cosmological probes.
II. GALAXY CLUSTER FORMATION AND EVOLUTION
This chapter explores the fascinating realm of galaxy cluster formation and evolution. We delve into the hierarchical structures that emerge from initial density fluctuations, the gravitational collapse and merging processes that shape these clusters, and the assembly and growth of these colossal objects. Additionally, we examine the crucial roles played by dark matter and dark energy, unraveling their gravitational influences on structure formation and their effects on cosmic expansion and cluster dynamics.
A) Hierarchical structures formation
In this subsection, we explore three key aspects related to the formation and evolution of galaxy clusters: initial density fluctuations, gravitational collapse and mergers, and the assembly and mass growth of galaxy clusters. We then examine the process of gravitational collapse, in which gravity causes celestial bodies to attract matter toward their center of mass, leading to the formation of dense structures. Finally, we study the mass assembly and evolution of galaxy clusters, address the challenges of applying the models developed to smaller structures, and highlight ongoing efforts to understand the relationship between ordinary matter and dark matter.
1) Initial density fluctuations: The probability distribution function (PDF) of the cosmological density fluctuations is an essential characteristic of the universe's large-scale bodies, like galaxy clusters. In the standard picture of gravitational instability, the PDF of the primordial density fluctuations responsible for the universe's current structures is assumed to obey a random Gaussian distribution. The PDF stays Gaussian as long as the density fluctuations are in the linear range. However, because of the substantial nonlinear mode coupling and the nonlocality of the gravitational dynamics, their PDF significantly departs from the original Gaussian form once they reach the nonlinear stage.
B) Role of dark matter and dark energy in the universe
We explore the role of dark matter and dark energy in the universe in depth, as they are two mysterious components that have a significant impact on the formation, dynamics, and expansion of structures. Dark matter, although invisible, exerts a gravitational influence on the formation of structures, such as galaxy clusters, which scientists can deduce by observing its effects on light. On the other hand, dark energy is a mysterious form of energy that affects the expansion of the universe.
1) Dark matter gravitational influence on structure formation: According to recent studies, the universe is composed of 5% ordinary matter; the rest of the universe is composed of dark matter and dark energy. Dark matter is not really dark, but invisible. Though dark matter is assumed to be transparent, it has mass. Scientists were able to calculate the mass of dark matter because of its gravitational influence. Scientists can calculate the amount of dark matter in a galaxy cluster by observing how gravity affects light. This is called gravitational lensing, which reveals how much mass a cluster contains and where it is located. Gravitational lensing makes the light from a single source take many paths, providing us with many images from different angles for one source. In terms of its overall contribution to the total mass and energy of the cosmos, scientists have used variations in the cosmic microwave background to estimate that dark matter makes up around 27% of the universe. Dark matter's gravitational influence and the use of gravitational lensing help scientists calculate its mass and location, contributing to our ongoing understanding of the universe's composition.
III. OBSERVATIONAL METHODS AND DATA
Studying celestial bodies involves a collection of many observational methods and data sets. Classifying these objects depends on analyzing the received data. In this section, a review of the existing methods is provided, highlighting their significance in astrophysics research. Additionally, the challenges and limitations are addressed, emphasizing the importance of careful analysis and interpretation of these techniques.
A) Optical observations
Optical observations focus on analyzing the apparent characteristics of galaxies and the bodies within them. Imaging and morphological analysis of galaxies provide a comprehensive understanding of the formation, evolution, and visual classification of galaxies. Then, the discussion moves to redshift survey analysis for the identification of galaxy clusters. Redshift surveys enable the determination of galaxies' velocities and distances, shedding light on the dynamics and distribution of galaxies across cosmic scales.
1) Imaging and morphological analysis of galaxies: As galaxies have different morphological characteristics, analyzing their complex shapes and structures reflects the past, present, and future of the universe, which helps in studying galaxies and galaxy clusters. Many classification schemes are used, many of which depend on morphological features such as the number of spiral arms, the number of nuclei, the size of the bulge, etc. Imaging galaxies relies on several methods, whether space- or ground-based, which will be discussed later. Before the advancement of technology, all the data were taken and analyzed manually. While human analysis is mostly accurate, with the advancement of surveys it would take a much longer time to analyze data; in the Galaxy Zoo project, it took 3 years to analyze ~300,000 galaxies. Thus, researchers have developed machine learning algorithms to classify, for instance, galaxies at a faster rate that keeps up with modern surveys. Although using machine learning algorithms saves time and effort, it carries a certain amount of uncertainty, and maximizing the efficiency of these algorithms remains an ongoing research area.
B) X-ray observations
X-ray observations depend on unraveling the contents of the universe by studying X-ray wavelengths. We first discuss gas temperature, density, and X-ray emission, which provide an understanding of the hot ionized gases within astronomical structures. Additionally, the determination of cluster mass and baryon content is discussed, emphasizing the role of X-ray observations in studying the gravitational effects and the distribution of matter within galaxy clusters.
1. Gas temperature, density, and X-ray emission: X-ray emission occurs due to the hot gases that fill the space between galaxies, known as the intra-cluster medium (ICM for short). The ICM consists of ionized hydrogen and helium, along with heavier elements, which emit X-rays. Densities and temperatures of the ICM can be measured using X-ray surveys, whose analysis gives the distribution of gases within the cluster. X-ray emission reveals the presence of cool cores within some clusters, characterized by a drop in gas temperature and a peak in the surface brightness of the X-ray emitting gas, where the temperature of the gas is less than that of the surroundings. They appear due to cooling flows, in which the gases cool and fall towards the center of the cluster, releasing gravitational potential energy and heating up again. On the other hand, recent observations suggest that the cooling flows are not as strong as they were thought to be, and there is still much debate on the regulation of gases within clusters. X-ray observations have also revealed other structures within galaxy clusters, such as filaments, bubbles, and cavities. These structures are thought to be associated with various physical processes occurring within the cluster. For example, gas sloshing can create spiral patterns in the X-ray emission, while active galactic nuclei (AGN) feedback can create bubbles and cavities. Mergers between clusters can also produce shocks and turbulence that contribute to X-ray emission. In addition to thermal emission, X-ray observations have also detected non-thermal emission from clusters of galaxies, which arises from high-energy particles accelerated by shocks in the cluster environment. The presence of non-thermal emission provides insights into the physical processes occurring in clusters, such as particle acceleration and magnetic field amplification.
C) Gravitational lensing
Gravitational lensing presents a technique for studying the bending of light by massive objects, revealing information about the distribution of mass in the universe. We first discuss the examination of the strong and weak lensing effects on background galaxies, highlighting how the distortion and magnification of light can be used to probe the mass distribution of foreground structures. Additionally, the discussion covers mass reconstruction and gravitational potential mapping, which involves the modeling and mapping of the gravitational lensing signal to infer the distribution of dark matter.
1) Strong and weak lensing effects on background galaxies:
D) Surveys and data sources
Surveys and data sources are a fundamental part of astrophysical research. In this part, we discuss several surveys and data sources that aided in the study of the cosmic microwave background and the discovery of astronomical bodies such as galaxies.
1) Planck satellite data and microwave observations:
IV. MORPHOLOGICAL CHARACTERISTICS
Morphological characteristics of galaxy clusters refer to the physical and structural properties that describe the appearance and arrangement of galaxies within these massive cosmic structures. Galaxy clusters come in different shapes: some are ellipsoidal, some are prolate, and others are oblate. Sometimes, smaller groups of galaxies can be found inside bigger ones. These clusters can also contain different types of galaxies.
A) Shapes of galaxy clusters
Galaxy clusters have various shapes, and their shapes are affected by many factors, like the dynamics of their galaxies, the gravitational forces, and the distribution of dark matter. The main shapes of galaxy clusters are ellipsoidal, prolate, and oblate.
1) Ellipsoidal, prolate, and oblate clusters: The shapes of galaxy clusters can be determined through observations, such as the distribution of their galaxies, gravitational lensing effects, and X-ray emission from hot gas within the cluster. Ellipsoidal, prolate, and oblate clusters are geometric shapes that galaxy clusters take. These shapes are characterized by the distribution of galaxies and dark matter along different axes. Ellipsoidal clusters are like elliptical clusters but have a more general ellipsoidal shape. This means that the cluster can be elongated or flattened in any direction, and its shape looks like an ellipsoid, which is a three-dimensional oval. The distribution of galaxies and dark matter is asymmetrical along all three axes. Prolate clusters are elongated along one axis, while the other two axes are relatively shorter. The galaxies and dark matter are concentrated along the long axis of the prolate cluster, which results in an obvious elongation in one direction. Oblate clusters are flattened along one axis, with two longer axes. They have a disk-like appearance, and the galaxies and dark matter are concentrated in the central plane of the cluster, resulting in a flattened shape. The difference between prolate and oblate clusters appears in the orientation of the elongation or flattening relative to the observer's line of sight. If the elongation or flattening is oriented along the line of sight, it appears as an elongation or flattening when viewed from Earth. Prolate clusters have their longest axis aligned with the line of sight, while oblate clusters have their shortest axis aligned with the line of sight.
2) The Influence of Cluster Mergers and Dynamical Processes on Galaxy Cluster Evolution: Cluster mergers, also known as cluster collisions or interactions, happen when two or more galaxy clusters come together under their mutual gravitational pull. These events are energetic and transformative. As clusters merge, galaxies, hot gas, and dark matter redistribute, changing the cluster's mass distribution and gravitational potential. An increasing amount of data has shown that many clusters are very complex systems. Optical analyses show that some clusters contain subsystems of galaxies, suggesting that they are still in the phase of relaxation, sometimes after a phase of cluster merging. Simultaneously, the interaction generates shock waves in the intracluster medium (ICM), heating it and intensifying X-ray emission. This heat affects gas dynamics and star formation in member galaxies. Cluster mergers also accelerate galaxy-galaxy interactions, changing galaxy morphology and star formation rates.
B) Sizes and mass distribution
B) Sizes and mass distribution
Galaxy clusters are giant cosmic entities, composed of galaxies, hot gas, and dark matter, and they are not static structures but dynamic structures that evolve on cosmic time scales. In this subsection, we discuss two important aspects of galaxy clusters: their size and extent, and their mass distribution and scaling relationships. We first discuss how galaxy clusters form and evolve and the importance of characterizing their size and extent. Next, we explore the distribution of mass within these clusters and the scaling relationships that link the observable properties of a cluster to its total mass. Understanding these aspects has important implications for studies aimed at constraining cosmological parameters using galaxy clusters. 1) Characterizing cluster size and extent: Galaxy clusters form by the gravitational merger of smaller clusters and groups. Major cluster mergers are the most energetic events in the universe since the Big Bang. Characterizing the size and extent of galaxy clusters is therefore essential in our quest to understand these massive cosmic entities, which vary widely in size, shape, and composition. The virial radius ($R_{vir}$) is a fundamental measure that represents the boundary within which the cluster's gravitational forces balance cosmic expansion, while optical richness estimates cluster size based on galaxy counts within specific magnitude ranges; one common way of defining such a radius is sketched below.
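One widely used convention for making the notion of cluster size quantitative (a sketch of standard practice, not necessarily the exact definition adopted by every study reviewed here) is to define an overdensity radius $R_\Delta$ such that the mean density enclosed equals a fixed multiple $\Delta$ of the critical density at the cluster's redshift, with $\Delta \approx 200$ often used as a proxy for the virial radius:

$$
M_\Delta = \frac{4}{3}\pi R_\Delta^{3}\, \Delta\, \rho_c(z), \qquad \rho_c(z) = \frac{3H^{2}(z)}{8\pi G}.
$$

Quoting $R_{200}$ or $M_{200}$ in this way ties an observable cluster size directly to a total mass.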
C) Substructures within galaxy clusters
Substructures provide insights into galaxy formation on intermediate scales, from galaxy groups and subgroups to filamentary morphologies and the large-scale cosmic web. Particular attention is paid to central dominant galaxies and brightest cluster galaxies, examining their distinctive yet pivotal roles as nexus points shaping their local environments through gravitational interactions and evolutionary processes across cosmic history. 1) Galaxy groups and subclusters: Galaxy groups and subclusters are important in understanding how galaxies come together and how gravity shapes the cosmos. These smaller structures also influence the properties and evolution of larger galaxy clusters. Galaxy groups are smaller gatherings of galaxies, often found within or around clusters, in which a few galaxies are bound together by their mutual gravitational attraction. Galaxy groups usually contain fewer than 50 galaxies within a diameter of 1 to 2 megaparsecs (Mpc, where 1 Mpc is approximately 3.26 million light-years, or about 2×10^19 miles), and their total mass is approximately 10^13 solar masses. Galaxy groups are less massive than galaxy clusters, and they have their own distinct dynamics.
D) Spatial distribution and clustering properties
Spatial distribution refers to how objects are distributed or arranged in space, specifically on large scales in the universe. It helps us understand the cosmic organization of galaxies, clusters, and other structures. Clustering properties provide insights into the degree and nature of objects' clustering in the universe. Two of the main clustering statistics are the cluster-cluster correlation function and the power spectrum. They are analytical tools used in astrophysics and cosmology to study the large-scale distribution of matter, particularly galaxy clusters. 1) Large-scale spatial distribution of clusters: The large-scale spatial distribution of galaxy clusters is a pivotal aspect of the universe's structure. These clusters are organized along large filamentary structures within the cosmic web. Furthermore, the large-scale distribution of clusters offers insights into the formation and evolution of cosmic structures. Numerical simulations based on these observations help us understand how clusters form and evolve over billions of years. In essence, the study of galaxy cluster distribution provides a vital glimpse into the universe's vast and complex architecture, deepening our comprehension of its fundamental principles. 2) Cluster-cluster correlation function and power spectrum: The cluster-cluster correlation function, denoted as ξ(r), assesses the clustering of galaxy clusters by measuring how the probability of finding clusters at different separations deviates from a random distribution. Positive ξ(r) values indicate clustering, while negative values signify anti-clustering. Studying ξ(r) at various scales helps in determining cosmological parameters and the distribution of dark matter. The power spectrum, represented as P(k), quantifies matter density fluctuations as a function of spatial scale. By examining its shape, we gain insights into the nature of primordial density fluctuations, which influence the formation of structures like galaxy clusters. Observations of the power spectrum, whether from galaxy surveys or cosmic microwave background studies, provide crucial information for refining cosmological models and understanding the universe's fundamental properties; the standard definitions of both quantities are written out below. 3) Voids, superclusters, and cosmic variance effects: Voids are vast, empty regions in space without galaxies or clusters. They are integral to the cosmic web, influencing matter distribution and the universe's expansion. The study of voids helps scientists test cosmological models and understand cosmic dynamics. Superclusters are colossal structures containing multiple galaxy clusters. They are interconnected by filaments and represent the densest regions in the cosmos. Superclusters provide insights into gravitational interactions and the distribution of galaxies. Cosmic variance effects arise from limited observational sampling, causing fluctuations in galaxy and cluster distribution. These variations can introduce uncertainties in cosmological measurements.
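For reference, the two clustering statistics introduced above can be written in their standard form (generic definitions as commonly quoted, not tied to a particular survey). If $\bar{n}$ is the mean number density of clusters, $\xi(r)$ gives the excess probability of finding a pair of clusters in volume elements $dV_1$ and $dV_2$ separated by $r$, and it is related to the power spectrum $P(k)$ by a Fourier transform:

$$
dP_{12} = \bar{n}^{2}\,[1 + \xi(r)]\,dV_1\,dV_2,
$$

$$
\xi(r) = \frac{1}{2\pi^{2}} \int_0^{\infty} P(k)\, \frac{\sin(kr)}{kr}\, k^{2}\, dk.
$$

Measuring either quantity therefore constrains the same underlying matter fluctuations, in configuration space or in Fourier space respectively.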
V. CONSTRAINTS ON COSMOLOGICAL PARAMETERS
This section examines the constraints imposed on cosmological parameters. Through statistical methodologies like maximum likelihood estimation and Bayesian analysis, we unravel the intricacies of error estimation, unlocking insights into fundamental parameters shaping the cosmos by addressing systematic uncertainties and harnessing technological advancements.
A) Maximum likelihood estimation and Bayesian analysis
In scientific analysis, error estimation is a crucial step: whenever any parameter is estimated, its error must be estimated too. Estimation is the process of obtaining model parameters from randomly distributed observations. With the technological advances of recent decades, two statistical tools have become popular for estimation: maximum likelihood estimation (MLE) and Bayesian analysis. The key difference between them is that in maximum likelihood estimation the parameters are treated as fixed but unknown, while in the Bayesian method the parameters are treated as random variables with known prior distributions. Bayesian estimation can yield more accurate results than MLE, but it is also more complex to compute. Overall, these two techniques are important to cosmological research for error estimation; a simple numerical sketch contrasting the two approaches is given below.
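As a minimal numerical sketch of the contrast described above (a toy example, not an analysis from the reviewed literature; the sample size, noise level, and prior are assumed values), the snippet below estimates the mean of normally distributed measurements with known noise, first by maximum likelihood and then with a Gaussian prior, whose conjugacy gives a closed-form posterior.

```python
# Toy sketch: MLE vs. Bayesian estimation of a Gaussian mean with known noise sigma.
# All numbers are hypothetical and chosen only to illustrate the two approaches.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma, n = 70.0, 5.0, 25            # assumed "true" mean, known noise, sample size
data = rng.normal(true_mu, sigma, size=n)

# Maximum likelihood: for Gaussian data with known sigma, the MLE of mu is the sample mean.
mu_mle = data.mean()
se_mle = sigma / np.sqrt(n)

# Bayesian: Gaussian prior mu ~ N(mu0, tau0^2); conjugacy gives a Gaussian posterior.
mu0, tau0 = 60.0, 10.0                       # assumed prior mean and prior standard deviation
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

print(f"MLE estimate:      {mu_mle:.2f} +/- {se_mle:.2f}")
print(f"Bayesian estimate: {post_mean:.2f} +/- {np.sqrt(post_var):.2f}")
```

The posterior mean is pulled slightly toward the prior, and its width combines the prior and data precisions; this is the practical sense in which the Bayesian result differs from the purely data-driven MLE, at the cost of the extra computation noted above.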
B) Systematic uncertainties
Statistical uncertainty is caused by stochastic processes: fluctuations arise from measurement errors and from the limited number of observations. Because of this, a collection of observations of the same phenomenon will differ from measurement to measurement, and the statistical uncertainty is a gauge of the range of this variation. Such statistical fluctuations are uncorrelated between two similar measurements of the same event. 1) Biases from sample selection and cluster misidentification: Any propensity that inhibits a question from being considered objectively is referred to as bias. Bias in research happens when one outcome or response is favorably chosen or encouraged above others by systematic inaccuracy introduced into sampling or testing. In astronomy, three commonly encountered notions of bias are Malmquist bias, the bias frame, and the perturbative bias expansion. Malmquist bias is an effect in observational astronomy referring to the preferential detection of intrinsically bright objects. A bias frame is essentially a zero-length exposure; it can be used to calibrate other frames when an exposure of exactly matching duration for a light frame is unavailable. The perturbative bias expansion is a way to describe the clustering of galaxies on large scales by a finite set of expansion coefficients, called bias parameters.
C) Constraints on cosmological parameters
New technology and advances in research have led to the establishment of a precise cosmological model, with cosmological parameters determined to an accuracy of around 10%. The term "cosmological parameters" refers to the parameters describing the dynamics of our universe, such as its curvature and expansion rate. 1) Density parameter $(\Omega_m)$: The density parameter is the ratio of the density of matter and energy in the universe to the critical density (the density that separates eternal expansion from eventual recollapse). The density used in this ratio is the sum of the baryonic matter, dark matter, and dark energy contributions. When this ratio is less than one, the universe is open and expands forever. If it is greater than one, the universe will eventually stop expanding and recollapse. If the ratio is equal to one, the universe is flat: it has just enough energy for the expansion to slow asymptotically, but not enough to recollapse. To the accuracy of current studies, the universe has a total density parameter equal to 1; the definitions involved are summarized below.
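For clarity, the quantities in this simplified picture can be written explicitly (standard definitions in generic notation):

$$
\rho_c = \frac{3H^{2}}{8\pi G}, \qquad \Omega = \frac{\rho}{\rho_c} = \Omega_b + \Omega_{dm} + \Omega_\Lambda,
$$

$$
\Omega > 1: \text{ closed (recollapse)}, \qquad \Omega < 1: \text{ open (eternal expansion)}, \qquad \Omega = 1: \text{ flat}.
$$

Here $H$ is the Hubble parameter and $G$ is Newton's constant; reading each case as a statement about the fate of the universe follows the matter-dominated approximation used in the text, since dark energy modifies the late-time expansion history.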
D) Friedmann's equations
Friedmann's equations are a set of equations that describe the expansion of the universe due to its matter content, and they provide models that describe many observational features of the universe. In the following paragraphs, we discuss the cosmological principle and constants, and we derive the first Friedmann equation. 1) The cosmological principle and Einstein's field equation: The cosmological principle states that our universe is isotropic and homogeneous. Isotropy means that on large scales the universe looks the same in all directions, so there is no preferred direction to look at. Homogeneity means that on large scales the universe looks the same from all locations, so there is no preferred location. The cosmological principle is a fundamental part of astrophysical research, as it describes the main assumptions about the universe's structure. Einstein's field equations (EFE) are a set of equations that describe the interaction of gravitation; they consist of ten non-linear partial differential equations. In this paper, we use the mostly negative metric convention, hence placing a negative sign before the cosmological constant; moreover, the energy-momentum tensor has negative signs on the spatial diagonal terms. However, using any convention leads to the same derived equations. Equation 3 represents the Einstein field equation; the commonly quoted forms of the field equation and the first Friedmann equation are reproduced below for reference.
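For reference, the commonly quoted forms of the field equation and of the first Friedmann equation are (written in one common sign convention; as the text notes, the signs of individual terms depend on the metric and curvature conventions adopted, and this block is a standard statement rather than the text's own Equation 3):

$$
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
$$

$$
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
$$

where $a(t)$ is the scale factor, $\rho$ the total energy density, $k$ the spatial curvature constant, and $\Lambda$ the cosmological constant. The first Friedmann equation follows from inserting the homogeneous, isotropic FLRW metric into the field equations.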
VI. DISCUSSION AND IMPLICATIONS
In this section, we further discuss and make sense of the reviewed literature regarding the ΛCDM model, and we compare it with other cosmological probes. By examining the connection between the observed properties of galaxy clusters and the predictions of the ΛCDM model, we can evaluate the robustness of this standard model of the universe. Additionally, we briefly mention the results of comparing galaxy cluster studies with other cosmological probes, such as supernova observations, measurements of cosmic microwave background anisotropy, and weak gravitational lensing.
A) Implications for the ΛCDM model
Validation of the ΛCDM model is essential to establish its credibility as a reliable framework for understanding the structure and evolution of the universe. By examining the observed properties of galaxy clusters against the predictions of the ΛCDM model, we can assess the degree of alignment and consistency between theory and observations. These implications not only strengthen our confidence in the ΛCDM model but also highlight areas of divergence and the need for further research into alternative models or extensions of the current framework. 1) Validating the standard cosmological paradigm: The review demonstrates the alignment between the observed properties of galaxy clusters and the predictions of the ΛCDM model, thus providing validation for the standard cosmological paradigm. For instance,
B) Comparison with other cosmological probes
In this subsection, we compare the results of studying galaxy clusters with those from other cosmological probes to gain a more complete understanding of the universe and its fundamental dynamics. Additionally, we study the link between the properties of galaxy clusters and measurements of cosmic microwave background anisotropy, which provide insight into the early universe and the formation of primordial oscillations. By analyzing these different probes, we can validate the ΛCDM model from many different perspectives and gain a more complete understanding of the structure and evolution of the universe. 1) Supernova observations and cosmic acceleration: The literature review findings regarding the relation between galaxy clusters and the ΛCDM model align with the observations of supernovae and the phenomenon of cosmic acceleration. The discovery of the accelerated expansion of the universe, inferred from measurements of Type Ia supernovae, provides significant support for the predictions of the ΛCDM model.
VII. CONCLUSION
The ΛCDM model and galaxy clusters are key to understanding the universe's structure and evolution. The ΛCDM model, which includes dark matter and dark energy, helps reconcile theoretical predictions with observational evidence. Cosmological parameters derived from this model reveal the universe's properties, including its matter density, dark energy density, and curvature. Galaxy clusters provide insights into the formation and evolution of cosmic structures. Observational methods like optical observations, X-ray observations, and gravitational lensing techniques offer valuable data on galaxy clusters. However, challenges such as biases, measurement uncertainties, and contamination need to be addressed. The study of galaxy clusters can help determine cosmological parameters like the matter density parameter (Ωm), the dark energy equation of state (w), and the Hubble constant (H0). The constraints on cosmological parameters obtained from galaxy clusters can be compared with other observational probes, such as supernova observations, cosmic microwave background anisotropy, and weak gravitational lensing. The study of galaxy clusters and their significance for cosmology opens exciting prospects for future research. Future research must include improvements in observational methods and techniques; advancing these tools will allow cosmological quantities to be inferred with smaller errors. These improvements could include the development of new high-resolution imaging, spectroscopic techniques, and multi-wavelength observations that provide insights into the interactions between baryonic matter and dark matter within galaxy clusters. A more complete understanding of baryonic physics will lead to more accurate predictions. The synergy between galaxy cluster studies and other probes will allow for cross-validation of results and tighter constraints on cosmological parameters. Galaxy clusters can also be used to test alternative cosmological models and exotic physics scenarios, providing insights into the validity of the ΛCDM model.
VIII. REFERENCES
Abstract Nowadays, the internet plays a major role in people's lives, especially in the formation of relationships in society. One major form of interaction that has gained a lot of space because of the internet is parasocial relationships, which are defined as one-sided connections with celebrities, media figures, or characters. Considering how much space social media figures occupy in adolescents' routines and the fact that adolescents (approximately ages 10-22) are more easily influenced and emotionally reactive due to the still-maturing prefrontal cortex, it is important to investigate the relationships they are building and the content they are consuming. Because of that, this paper seeks to expand knowledge on the nature of parasocial relationships by investigating the effect of these relationships on Brazilian adolescents and young adults. The goal is to investigate how these relationships impact their personal relationships, emotions, and behaviors. The research findings were based on analyses and discoveries made through a questionnaire designed by the writers and on observational research conducted through the study of previous works. Through that, both positive and negative effects of parasocial relationships were discovered. It was found that the adolescents were indeed influenced, so strongly, in fact, that there was an increase in their consumerism habits according to what was promoted by their idols. However, parasocial relationships were not the main reason for young people's screen time, differing from the common belief. Additionally, these relationships have provided comfort to most participants, supporting the hypothesis that many parasocial relationships are established because they help individuals.
I. Introduction
II. Literature Review
Adolescence (approximately ages 10-22 years), defined as a transitional period between childhood and adulthood, is marked by changes in social interaction, acquisition of mature cognitive abilities, and behavioral development.
III. Methodology
As described before, the study aims to investigate and understand the effect of parasocial relationships on the personal interactions, emotions, and behaviors of Brazilian adolescents and young adults aged 12 to 22. In order to do this, an online survey — which included multiple questions — was developed and distributed through different social media platforms for random and anonymous groups of teenagers to guarantee a diverse representation of Brazil's adolescents. The survey data was collected from August 15, 2023, to August 29, 2023 (a period of two weeks) through the Google Forms platform and had 83 participants, from which 84,3% (70 participants) were female, 13,3% were male (11 participants) and 2,4% (2 participants) prefer not to identify themselves. In the study, out of the 27 states (including the Federal District) that are part of Brazil, 20 were represented by one or more participants. The states that were represented included: Alagoas (AL), Amazonas (AM), Bahia (BA), Distrito Federal (DF), Espírito Santo (ES), Goiás (GO), Mato Grosso (MT), Minas Gerais (MG), Pará (PA), Paraíba (PB), Paraná (PR), Pernambuco (PE), Piauí (PI), Rio de Janeiro (RJ), Rio Grande do Norte (RN), Rio Grande do Sul (RS), Rondônia (RO), Santa Catarina (SC), São Paulo (SP) and Tocantins (TO). The states that were not included were Acre (AC), Amapá (AP), Ceará (CE), Maranhão (MA), Mato Grosso do Sul (MS), Roraima (RR) and Sergipe (SE). The survey counted with twenty-six questions formulated in Portuguese, which could be taken in about three to seven minutes. The questions and options given to the participants were described in the order below: How old are you? (12) (13) (14) (16) (17) (18) (19) (20) (21) (22) What is your gender? (Female) (Male) (Prefer not to say) What city and state are you from? Note: Answer following the Salvador-BA model; Rio de Janeiro-RJ, etc. (Open answer) Do you consider yourself a fan of any celebrity/fictional character? Note: This includes characters from books, movies, series, dramas, anime, drawings, and celebrities of various types, athletes (Football, Basketball, Formula One, Volleyball, etc.), actors, singers, dancers, etc. (Yes) (No) If you consider yourself a fan, which category does your celebrity(s)/fictional character(s) fall into? (Fictional character(s) from shows, sitcoms, movies, books, animes, doramas, cartoons, etc.) (Athlete(s)) (Singer(s)) (Actor(s) (Actress(es)) (Dancer(s)) (Others) How often do you follow the news and social media of these celebrity(s)/fictional character(s)? (Never) (Rarely) (Frequently) (Always) On average, how many hours do you spend on your cell phone? (Less than 1h) (1h-3h) (3h-5h) (5h-7h) (10h or more) How much of the time you use your cell phone do you spend interacting and/or checking the social networks of your favorite celebrity(s) and/or fictional character(s)? (Less than 1h) (1h-3h) (3h-5h) (5h-7h) (7h-9h) (10h or more) Do you usually interact on social media with these celebrity(s) through comments on lives/posts and private messages? (Yes) (No) Have you created any page/fan club dedicated to this(these) celebrity(s)/fictional character(s)? (Yes) (No) Do you know a lot about the life of this celebrity(s)/fictional character(s)? Note: Check yes if you know a lot about the personal life, journey and background of this celebrity(s)/fictional character(s) (Yes) (Somewhat) (No) Do you feel close to this celebrity(s)? Does this person(s)/character(s) bring a sense of familiarity and belonging? 
Note: Mark yes if you feel like you really have a close relationship with the person you accompany. (Yes, a lot) (A little bit) (No) Does your relationship(s) with the celebrity(s)/fictional character(s) influence(s) your everyday life? Note: Answer yes if you have changed your way of thinking about certain subjects and adopted new habits/behaviors/quirks since you started following this celebrity(s)/fictional character(s) (Yes) (Somewhat) (No) Have you ever bought something influenced by the celebrity(s)/fictional character(s) you follow? (Yes) (No) (No, but I wanted to/think about it) Do you feel offended when someone bad mouths/offends the celebrity(s)/fictional character(s) you follow? (Yes) (No) Have you ever gotten into a fight/argument with someone for offending the celebrity(s)/fictional character(s) you follow? (Yes) (Yes, including with close friends and family) (No) Have you ever had positive mood swings because of any interaction/post from the celebrity(s)/fictional character(s) you follow? (Yes) (No) Have you ever had negative mood swings because of any interaction/post from the celebrity(s)/fictional character(s) you follow? (Yes) (No) Have you ever been to an event (show, theater, meet and greet, etc.) to see/follow the celebrity(s)/fictional character(s) you follow? (Yes) (No) Have you ever been disappointed by the celebrity(s)/fictional character(s) you follow/support? (Yes) (No) Do you regret any decisions made under the influence of the celebrity(s)/fictional character(s) you follow? (Yes) (No) Do you have any mental illness? (Yes) (No) If you suffer from a mental illness, please select all that apply. (Anxiety) (Depression) (Eating disorders) (Obsessive-compulsive disorder- OCD) (Bipolar disorder) (Schizophrenia) (Post-traumatic stress disorder- PTSD) (Borderline personality disorder) (Others) (I don't suffer from any mental illness) Do you think that following a celebrity(s)/fictional character(s) might have a positive effect(s)/negative effect(s) on the development of the mental illnesses you suffer from? (Does not apply (I do not suffer from any mental illness) (No, there were no changes in my psychological state) (Yes, positive effects) (Yes, negative effects) (Yes, both negative and positive effects) Are you familiar with the concept of parasocial relationships? (Yes) (No) Do you identify with the following definition of a parasocial relationship? One-sided relationships established with celebrities, fictional characters, and digital influencers in which one individual exerts time, interest, and emotional energy on another person who is totally unaware of their existence. (Yes) (No) After each participant answered the questions, the data collected was automatically transformed into a spreadsheet and graphs through the Google Forms platform to identify patterns and correlations among participants. Considering the limitations of the research, it was decided to follow a descriptive and correlational research design.IV. Results
The data collected from 83 Brazilian adolescents, of whom 70 were women (84.3%), 11 were men (13.3%), and two preferred not to identify themselves (2.4%), ranging from 12 to 22 years old, resulted in the following findings. Note: Before reading, be aware that the data might not completely represent all Brazilian adolescents because of the research limitations described before. The research showed that 4.8% (4 participants) of the adolescents were 12 years old; 1.2% (1 participant) were 13 years old; 12% (10 participants) were 14 years old; 8.4% (7 participants) were 15 years old; 19.3% (16 participants) were 16 years old; 8.4% (7 participants) were 17 years old; 10.8% (9 participants) were 18 years old; 9.6% (8 participants) were 19 years old; 13.3% (11 participants) were 20 years old; 7.2% (6 participants) were 21 years old; and 4.8% (4 participants) were 22 years old. Of the total 83 participants, 97.6% declared that they considered themselves fans of a celebrity or a fictional character, while only 2.4% of those surveyed denied such. Of those who stated that they were fans of some celebrity or fictional character, 75.9% were fans of fictional characters (which included characters from shows, sitcoms, movies, books, animes, doramas, cartoons, etc.); 22.9% were fans of athletes; 65.1% were fans of singers; 42.2% were fans of actresses and actors; 6% were fans of dancers; and 4.8% declared that they were fans of other types of celebrities, such as journalists, digital influencers, and YouTubers. It is essential to note that participants could select more than one category in this part of the research, in order to gather data about the different categories participants were interested in. 57.8% of the participants stated that they frequently keep up with the news and social media of their favorite celebrities or fictional characters, 22.8% said they rarely keep up, and 21.7% always do. Only 2.4% answered never.
V. Discussion
As shown by the data presented, most of the adolescents and young adults who participated in the study expressed that they, to different degrees, were fans of some celebrities/fictional characters. Only 2 of the 83 participants declared they did not consider themselves fans of anyone. When analyzing which categories of media figures most commonly attract teenagers, it was observed that fictional characters from shows, sitcoms, movies, books, animes, doramas, cartoons, etc., were the ones that attracted the most people, with 75.9% of participants indicating that they were fans. The other category with a high number of fans was singers, with a percentage of 65.1%, followed by actors/actresses with 42.2%. This indicates that, for the sample group surveyed, the groups that can most impact and influence teenagers at present are fictional characters, singers, and actors/actresses, a pattern that might also appear in future large-scale studies if replicated. Although adolescents in the study showed different frequencies of following the celebrities/fictional characters on social media, it is possible to observe that most of them did so, with only 2.4% saying that they never accessed the celebrities'/fictional characters' social media accounts. This data indicates that most adolescents, although at different levels, use the internet to connect and engage with celebrities and characters they like to follow. Regarding the number of hours spent on the cell phone per day, most declared three hours or more (78.1% of participants), with just 21.7% declaring an average of 1h-3h per day and none declaring less than 1 hour on the cell phone. The largest share of participants (39.8%) declared having 3 to 5 hours of screen time, which agrees with similar data from a more detailed Brazilian study conducted in 2019, which stated that teenagers' average screen time was about 5.8h on weekdays and 8.8h on weekends.
VI. Conclusion
The evidence indicates that parasocial relationships have significant effects, both positive and negative, on the personal interactions, emotions, and behaviors of Brazilian adolescents and young adults. In the digital era, these connections have become more common and start as early as childhood, directly affecting the child's character development, which becomes even more apparent in behaviors during adolescence. These relationships increase the vulnerability of adolescents and young adults, since they become easily influenced, as shown through the increase in their consumerism habits, buying what is promoted or sold by their "idols." However, contrary to common belief, according to the collected data, parasocial relationships are not the main driver of young people's screen time. Additionally, it is possible to say that these relationships have provided positive emotions to most participants, supporting the hypothesis that many parasocial relationships are established because they help individuals cope with their emotions, which is a benefit. It is also important to emphasize that even though most participating adolescents and young adults (12-22) claimed to have developed these relationships, most did not know what the term meant, demonstrating the lack of studies on the issue in Brazil as well as the limited reach of the existing knowledge among the general public, and reinforcing the importance of this study.
VII. Acknowledgments
We want to express our deepest gratitude to our mentor, Salma Elgendy, for her tireless and helpful patience and guidance in writing this paper. Her assistance and encouragement were indispensable and made this endeavor possible. We also would not have been able to achieve such a milestone without the support of the Youth Science Journal, which gave us the opportunity and resources needed to produce our research paper with excellence. Lastly, we would be remiss in not mentioning our gratitude to our friends and family, especially our parents, who have always provided the love and unconditional belief we needed. Their support was our biggest motivation in this process and our personal development.VIII. References
Abstract As of 2023, 30,000 websites are hacked daily, and 64% of companies worldwide have experienced at least one form of cyber-attack. As hacking increases, the need to implement robust security systems increases. This paper discusses the implementation of a security system that combines Cloudflare's API, Akamai's API, and a machine learning (ML) algorithm. Machine learning and deep learning algorithms were compared to determine which one achieves the best accuracy. The XGBoost classifier achieved the highest accuracy, since it deals efficiently with large datasets and uses ensemble learning. The XGBoost model was also recompiled to interact with the other APIs. This project can be used like any other API, but it provides the features of the two underlying APIs, their security layers, and the ML algorithm. A secondary research method (e.g., research papers and datasets) was used to obtain all data used in the paper and for the implementation of the project. Qualitative data plays a crucial role in elucidating the characteristics and functionality of APIs, especially Cloudflare's and Akamai's. It is employed to articulate the purpose and mechanics of APIs, delineating how they function and their intended usage. The IPs were collected to be used as training data for the ML model. The data was filtered (removing incomplete IPs) and examined randomly to ensure its quality. The result was significant, as the accuracy of this project was 97.6%. Therefore, the faults and bugs in Cloudflare's API and Akamai's API were fixed, enhancing the security of many datasets.
I. Introduction
Due to the great growth in technology, the need to implement a secured network is increasing. The widespread usage of computerized systems has raised critical threats related to hacking.
II. Abbreviation Table
| Words | Abbreviations |
|---|---|
| Machine Learning | ML |
| Deep Learning | DL |
| Internet Protocol | IP |
| Content Delivery Network | CDN |
| Application Programming Interface | API |
| Distributed Denial of Service | DDoS |
| Domain Name System | DNS |
| Logistic Regression | LR |
| Recurrent Neural Network | RNN |
| Extreme Gradient Boosting | XGBoost |
| Secure Sockets Layer | SSL |
| Transport Layer Security | TLS |
| Structured Query Language | SQL |
| Cross-Site Scripting | XSS |
| Web Application Firewall | WAF |
III. Application programming interfaces (APIs)
The research question of this paper is: "How can an impenetrable API be implemented by combining Cloudflare's and Akamai's APIs with a machine learning model?" Implementing such a project will increase the security of the API and provide the features of the two APIs.
1. API
2. Cloudflare
3. DNS
DNS Configuration: Update your domain's DNS records to point to Cloudflare's DNS servers; Cloudflare will then manage your domain's traffic. The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain names, such as nytimes.com or espn.com, while web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources. A minimal sketch of such a DNS update through Cloudflare's API is shown below.
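As an illustration of this configuration step (a hedged sketch, not the project's own code: the zone ID, API token, hostname, and IP address below are placeholders), an A record can be created through Cloudflare's DNS records endpoint so that traffic is proxied through Cloudflare:

```python
# Sketch: creating a proxied A record via Cloudflare's DNS records API.
# ZONE_ID and API_TOKEN are placeholders supplied by the Cloudflare dashboard.
import requests

ZONE_ID = "your_zone_id"
API_TOKEN = "your_api_token"

def create_dns_record(name: str, ip: str) -> dict:
    """Point `name` at `ip` and proxy the traffic through Cloudflare."""
    url = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records"
    headers = {"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"}
    payload = {"type": "A", "name": name, "content": ip, "ttl": 3600, "proxied": True}
    response = requests.post(url, headers=headers, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(create_dns_record("www.example.com", "203.0.113.10"))
```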
4. Content Delivery
Cloudflare, a leading content delivery network (CDN) provider, leverages its extensive global network infrastructure to optimize content delivery and enhance the performance of websites and web applications. By strategically distributing cached content across its network of data centers worldwide, Cloudflare reduces latency and accelerates the delivery of static and dynamic web content to end-users. Through its edge server architecture, Cloudflare efficiently caches and serves web assets, including HTML pages, images, videos, and other multimedia content, ensuring fast and reliable access regardless of the user's geographical location. Additionally, Cloudflare's content delivery capabilities include intelligent routing algorithms that dynamically route traffic through the fastest and most reliable network paths, further improving response times and minimizing packet loss. With its robust content delivery network, Cloudflare empowers organizations to deliver a seamless and responsive user experience, optimize web performance, and scale their online presence to meet growing demands effectively.
5. Security and load balancing
Cloudflare offers robust load-balancing solutions designed to efficiently distribute network traffic across multiple servers or data centers. With customizable routing policies and advanced traffic management features, Cloudflare ensures high availability, scalability, and reliability for web applications and services. Leveraging its global anycast network, Cloudflare intelligently directs incoming requests to the nearest and most optimal server location, minimizing latency and delivering a fast user experience. Additionally, Cloudflare's load balancing features include health checks, failover mechanisms, and traffic shaping rules, enabling proactive monitoring of server health, automatic traffic rerouting during failures, and prioritization of critical traffic during peak demand. On the cybersecurity front, Cloudflare provides a robust suite of security solutions aimed at protecting websites and web applications from various cyber threats. Leveraging its global network infrastructure, Cloudflare offers distributed denial- of-service (DDoS) protection, shielding against large-scale attacks that aim to disrupt online services. Additionally, Cloudflare offers a web application firewall (WAF) that helps filter and block malicious traffic, safeguarding against common web application vulnerabilities such as SQL injection and cross-site scripting (XSS) attacks. Cloudflare's security offerings also include bot management tools to identify and mitigate automated threats, ensuring legitimate users can access online resources without interference. As illustrated in Figure (3). As Bumanglag and Kettani stated, "Moreover, Cloudflare's SSL/TLS encryption capabilities help secure data transmission between clients and servers, protecting sensitive information from interception and unauthorized access. With its comprehensive suite of load balancing and security features, backed by a global network infrastructure, Cloudflare empowers organizations to fortify their online presence, maintain operational resilience, and safeguard their digital assets against evolving cybersecurity threats while optimizing web performance and user experience."6. Akamai
Akamai stands as a leading content delivery network (CDN) provider. Their reputation is built upon a comprehensive suite of services meticulously designed to refine the entire internet experience. At the core of their offerings lies a powerful combination of optimized content delivery and application acceleration. This translates to a user experience characterized by lightning-fast loading times, smooth web interactions, and robust security measures across the web. To achieve these results, Akamai incorporates a range of performance optimization features. These include asset compression, image optimization, and resource minification, all working in concert to dramatically reduce load times for websites and applications. The tangible benefit? A seamless and responsive user experience — a critical factor in driving user engagement and satisfaction. But Akamai's expertise extends beyond content delivery. They are specialists in application acceleration as well. This goes beyond simply delivering content quickly. By employing advanced techniques like caching, compression, and route optimization, Akamai meticulously minimizes latency and enhances the responsiveness of web applications and APIs. The results are undeniable: a significant improvement in user experience, a surge in user engagement, and ultimately, business growth for organizations that leverage the power of Akamai's services7. Edge Server
Akamai's edge server configuration stands as a cornerstone of its global network, meticulously designed to facilitate efficient content delivery across the digital landscape. With thousands of strategically positioned edge servers dispersed throughout data centers worldwide, Akamai ensures that content is seamlessly cached and served to end-users with exceptional reliability and performance. These edge servers are meticulously configured to optimize content delivery, leveraging caching mechanisms to store frequently accessed content locally. By doing so, Akamai minimizes latency and accelerates content delivery, guaranteeing swift access to resources regardless of users' geographical locations. This strategic approach not only enhances user experiences but also bolsters overall reliability, as Akamai's edge servers are adeptly prepared to handle peak traffic periods and unforeseen surges in demand. Through the utilization of Akamai's edge server infrastructure, organizations can confidently deliver content and applications with minimal latency and maximum availability, thereby establishing a robust digital presence capable of meeting the dynamic needs of modern users.
8. Security Solutions
IV. XGBoost:
XGBoost stands for Extreme Gradient Boosting. Some of the optimizations used include regularized model formulation to prevent overfitting and tree pruning to reduce model complexity. Due to its efficient tree-boosting algorithm and regularization techniques, XGBoost models often achieve better accuracy than other machine learning algorithms. The models can handle complexity through hyperparameters like the learning rate and the number of boosting iterations. The most important factor behind the success of XGBoost is its scalability in all scenarios. The system runs more than ten times faster than existing popular solutions on a single machine [16]. For example, testing an SVM (support vector machine) model on the data took more than an hour to run, but XGBoost ran in 41 seconds. The XGBoost model scales to billions of examples in distributed or memory-limited settings. The scalability of XGBoost is due to several systems and algorithmic optimizations. These innovations include a novel tree learning algorithm for handling sparse data and a theoretically justified weighted quantile sketch procedure that enables handling instance weights in approximate tree learning. Parallel and distributed computing makes learning faster, enabling quicker model exploration [16]. Furthermore, XGBoost is an ensemble learning method; in other words, it combines the predictions of multiple weak models to produce a stronger one. All of the above make XGBoost robust and improve its accuracy; a minimal training sketch is given below.
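As a minimal sketch of how such a classifier might be trained in this setting (the file name, column names, and hyperparameter values are assumptions for illustration, not the project's exact pipeline), using the octet features described in the Methods section below:

```python
# Sketch: training an XGBoost classifier to label IPs as safe (1) or suspicious (0)
# from their four octets. File name and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

data = pd.read_csv("ip_dataset.csv")                  # hypothetical CSV: IP1, IP2, IP3, IP4, Case
X, y = data[["IP1", "IP2", "IP3", "IP4"]], data["Case"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# learning_rate and n_estimators are the hyperparameters mentioned above.
model = XGBClassifier(n_estimators=300, learning_rate=0.1, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```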
V. Methods
i. Integration Between Akamai's and Cloudflare's API:
Combining Akamai, Cloudflare, and AI can create a powerful solution that enhances the performance, security, and intelligence of your web applications. To create this new combined security layer, the application's needs, traffic patterns, and potential AI use cases in the network must be understood. The code snippet is a Python script that integrates data from Akamai and Cloudflare, two content delivery network (CDN) providers. The script fetches data from Akamai using its API, applies AI-generated insights to this data, and is then expected to apply these insights to Cloudflare using its API. Importing Libraries: The script imports several Python libraries, including requests for making HTTP requests, pandas for loading the data into a pandas DataFrame, sklearn for machine learning tasks, XGBoost for gradient boosting, and imblearn for handling imbalanced data, as shown in Figure (5). From Figure (6), Akamai and Cloudflare API Configuration: Configuration parameters for the Akamai and Cloudflare API endpoints and API keys are defined at the beginning of the script. In Figure (7), get_akamai_data Function: This function sends an HTTP GET request to the Akamai API endpoint using the provided API key for authorization. It expects a JSON response and returns the fetched data. apply_insights_to_cloudflare Function, as shown in Figure (8): This function is a placeholder and lacks implementation. It is intended to apply insights generated by AI to Cloudflare; the specific logic for interacting with the Cloudflare API and applying insights needs to be implemented within this function. generate_ai_insights Function: This function is also a placeholder and lacks implementation. Its purpose is to generate AI-driven insights based on the data obtained from Akamai; the specific AI logic for generating insights is missing in the code and needs to be implemented (Figure (9)). Significant implementation work is therefore required for the generate_ai_insights and apply_insights_to_cloudflare functions to make the script functional, and any AI model or logic used for generating insights needs to be incorporated into the code. Main Function: The main function is the entry point of the script. It calls get_akamai_data to retrieve data from Akamai and stores it in the akamai_data variable. It then calls generate_ai_insights to generate AI-driven insights based on the Akamai data; however, the implementation of this function is incomplete, so it does not currently generate any insights. Finally, it calls apply_insights_to_cloudflare to apply these insights to Cloudflare, although this function is also incomplete and does not perform any actual actions on Cloudflare, as shown in Figure (10). A condensed sketch of this overall structure is given below.
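The description above can be condensed into the following skeleton (a sketch reconstructed from that description, since the figures themselves are not reproduced here; the endpoint URLs and keys are placeholders, and the two placeholder functions are left unimplemented exactly as the text states):

```python
# Condensed sketch of the integration skeleton described above.
# Endpoints and keys are placeholders; the two placeholder functions are unimplemented.
import requests

AKAMAI_API_ENDPOINT = "https://akamai.example/api/traffic-data"       # placeholder
AKAMAI_API_KEY = "akamai_api_key_placeholder"
CLOUDFLARE_API_ENDPOINT = "https://api.cloudflare.com/client/v4"      # base URL
CLOUDFLARE_API_KEY = "cloudflare_api_key_placeholder"

def get_akamai_data() -> dict:
    """Fetch traffic data from Akamai; expects and returns a JSON response."""
    headers = {"Authorization": f"Bearer {AKAMAI_API_KEY}"}
    response = requests.get(AKAMAI_API_ENDPOINT, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()

def generate_ai_insights(akamai_data: dict) -> dict:
    """Placeholder: run the trained model (e.g., the XGBoost classifier) on the data."""
    raise NotImplementedError("AI insight generation still to be implemented")

def apply_insights_to_cloudflare(insights: dict) -> None:
    """Placeholder: translate insights into Cloudflare API calls (e.g., firewall rules)."""
    raise NotImplementedError("Cloudflare update logic still to be implemented")

def main() -> None:
    akamai_data = get_akamai_data()
    insights = generate_ai_insights(akamai_data)
    apply_insights_to_cloudflare(insights)

if __name__ == "__main__":
    main()
```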
ii. Machine learning implementation:
Data Collection and Preparation: The data was collected from a programmer at a cybersecurity company via GitHub. The collected data was in the form shown in Table (2). Since providing the model with more informative features tends to enhance accuracy, each IP address was split into its four octets, as shown in the second table below (a minimal preprocessing sketch follows the tables).
IP | Case (safe or suspicious) |
---|---|
18.148.223.130 | Safe |
75.39.229.204 | Safe |
154.41.195.168 | Safe |
IP1 | IP2 | IP3 | IP4 | Case |
---|---|---|---|---|
18 | 148 | 223 | 130 | 1 |
3 | 60 | 243 | 123 | 0 |
127 | 147 | 158 | 152 | 0 |
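The preprocessing sketch referenced above, assuming the raw file has the two columns of the first table and using the label mapping of the second table (Safe = 1, otherwise 0); the sample values are taken from the tables for illustration:

```python
# Sketch: split each IPv4 address into four numeric octet features and
# encode the label as 1 (safe) / 0 (suspicious), matching the tables above.
import pandas as pd

raw = pd.DataFrame({
    "IP": ["18.148.223.130", "3.60.243.123", "127.147.158.152"],
    "Case": ["Safe", "Suspicious", "Suspicious"],
})

octets = raw["IP"].str.split(".", expand=True).astype(int)
octets.columns = ["IP1", "IP2", "IP3", "IP4"]

features = octets.assign(Case=(raw["Case"] == "Safe").astype(int))
print(features)
```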
iii. Monitoring and Analytics:
Akamai's and Cloudflare's monitoring tools are used to gain insights into application and network performance, user behavior, and security threats, and analytics tools are integrated to track user engagement, conversion rates, and other relevant metrics.
iv. Redundancy:
Failover mechanisms are configured using both Akamai's and Cloudflare's load balancing and traffic management features, ensuring that if one service experiences downtime, traffic seamlessly shifts to the other without major disruptions.
v. Optimization:
The XGBoost (ML) model was recompiled based on feedback and performance data and added to the API integration. Finally, it was tested under different strategies using Cloudflare's API tester, which allows the project to be tested on a real website.
VI. Results
i. Negative results:
ii. Positive results:
VII. Discussion
From the results section, it can be inferred that the project addressed the research question. Furthermore, it provides more features to the users. The project is highly significant for organizations that seek to protect their digital assets and data, enhance their online presence, and mitigate various cyber threats. It leverages both Cloudflare's and Akamai's security features to create a multi-layered security approach, implementing distributed denial-of-service (DDoS) mitigation, bot protection, and OWASP Top 10 security measures through both platforms. Making them work together is more secure than using each of them alone, as they filter out malicious traffic and ensure that the web servers remain available and responsive during an attack. In addition to the two APIs, which can detect anomalous activity, the ML model supports them in doing so effectively. The project system can analyze login attempts and detect patterns that suggest cyberattacks, helping to prevent unauthorized access to systems and accounts.
Limitations:
The training data contains only IPv4 addresses. IPv6 now exists as well, so when requests arrive from IPv6 addresses, the machine learning model will not be effective, since it was trained only on IPv4 data. However, this should not have a strong negative impact, as IPv6 is still relatively uncommon, which is also the reason the chosen training data is IPv4. Because the system is complex, it requires experts to be ready for any unexpected errors. Although the system has high accuracy, users should always be prepared for failures.
Recommendations:
Although the familiar IPs are IPv4, the security system should be prepared for all cases, so it is recommended to also train the machine learning model on IPv6 data. The main reason this paper did not use IPv6 in the training data is that no sufficiently large IPv6 dataset is available.
VIII. Conclusion
Nowadays, hacking is increasing, and many security systems are compromised, leading to the loss of data and money. Therefore, the significance of this paper lies in its discussion of the implementation of a new security system: the combination of Cloudflare's and Akamai's APIs with machine learning. This system has the security layers and features of the two APIs. The two APIs were combined, and then the recompiled machine learning algorithm was added to them. The XGBoost (machine learning) model's role is to determine whether an IP is safe or suspicious. The dataset collected for the ML model contains more than 30 thousand IPs. Although the collected data is imbalanced and large, XGBoost deals with it effectively; in addition, it is an ensemble learning method, which increases the accuracy. The project was then tested on a real website to simulate its real implementation. The project proved its competence, as its accuracy is 97.6%. This accuracy is considerably higher than that of Cloudflare's API or Akamai's API alone. Finally, the project provides the features of both APIs, their security layers, and the ML model to ensure security and enhance the API industry.
IX. References
Appendix
A. Training dataset
[1] Miroslav Stampar, "IPs collection," 2024. [Online]. Available: https://github.com/stamparm/ipsum/tree/master/levels
Abstract This study investigates the neural basis of schizophrenia and its implications for treatment development. Schizophrenia is a complex mental disorder characterised by hallucinations, delusions, and cognitive impairments. Neuroimaging studies consistently show abnormalities in brain regions involved in cognition and sensory processing. Genetic factors and environmental influences contribute to the risk of developing schizophrenia. Current treatments aim to address neural network dysfunction and symptom management. However, the findings emphasise the need for personalised and innovative treatments, ethical considerations, and continued research to enhance understanding and patient outcomes. The study recognizes the heterogeneity of schizophrenia and the importance of tailoring interventions to individual patients. Ethical considerations surrounding the treatment of schizophrenia patients are also highlighted, emphasising the significance of patient-centred care. Ongoing research efforts are crucial to deepen our understanding of the disorder, unravel complex neurobiological mechanisms, and develop novel interventions. By integrating scientific inquiry with compassionate care, we can work towards a future where individuals with schizophrenia can lead fulfilling lives and reach their full potential. The study underscores the urgency of advancing our knowledge and developing effective treatments to improve the lives of those affected by schizophrenia.
I. Introduction
Schizophrenia is a complex mental disorder characterised by hallucinations, delusions, cognitive impairments, and social dysfunction. It stands apart from other psychiatric conditions due to its unique symptomatology. Understanding the underlying neural mechanisms of schizophrenia is crucial for the development of innovative treatments that can effectively alleviate symptoms and enhance the quality of life for individuals affected by the disorder.
II. Understanding Schizophrenia
Schizophrenia is a psychotic disorder characterised by disturbances in perception, cognition, and social functioning. It affects approximately 0.32% of the global population.
III. Neural Basis of Schizophrenia
IV. Neuroimaging studies on Schizophrenia
V. Genetic Factors and Neural Network Dysfunction in Schizophrenia
Although the exact causes of schizophrenia are still unknown, genetic factors are widely believed to play a significant role in its development.
VI. Environmental Factors and Neural Network Dysfunction in Schizophrenia
In addition to genetic factors, environmental influences during the prenatal and perinatal periods have been linked to abnormal brain development and an increased risk of developing schizophrenia. Factors such as low birth weight, premature labour, and birth complications like asphyxia have been associated with disruptions in neural circuitry and an elevated risk of developing the disorder.
VII. Current treatments
Moving on to current treatments for schizophrenia, despite the limited understanding of the underlying neural mechanisms, treatment approaches aim to address the abnormalities in neural networks associated with the disorder. Treatment options for schizophrenia include medications, psychosocial interventions, and electroconvulsive therapy. Medications play a significant role in managing schizophrenia symptoms. They include both first-generation and second-generation antipsychotics, each with different neurological side effects and costs. Examples of first- generation antipsychotics are Chlorpromazine and Fluphenazine, while second-generation antipsychotics include Aripiprazole (Abilify), Asenapine (Saphris), and Brexpiprazole (Rexulti)VIII. Methodology
Regarding the methodology employed in schizophrenia research, several key steps are involved. These steps ensure the systematic collection and analysis of data to advance our understanding of the disorder. Firstly, neuroimaging techniques, such as magnetic resonance imaging (MRI) and functional MRI (fMRI), are utilised to examine the structural and functional brain abnormalities associated with schizophrenia. Stringent selection criteria are implemented to ensure the representative nature of the sample. Molecular investigations play a crucial role in exploring the genetic and environmental factors associated with the development of psychotic illnesses. Advanced techniques, including gene expression profiling and epigenetic analyses, are employed to investigate these factors. Systematic data extraction and organisation are vital in synthesising relevant information from selected studies and sources. The collected data is then categorised according to specific research domains, such as neuroimaging findings, genetic data, and environmental factors. The collected data is subjected to modern statistical analysis methods to identify significant patterns and relationships within the neuroimaging and molecular data. Techniques such as voxel-based morphometry, functional connectivity analysis, and gene expression quantification are employed to analyse structural and functional brain abnormalities. Integration of findings from neuroimaging and molecular investigations allows for a comprehensive understanding of the underlying neural mechanisms of schizophrenia. By identifying potential intersections and correlations, researchers can gain insights into the complex nature of the disorder. Throughout the research process, strict adherence to ethical considerations is of utmost importance. Respecting participant confidentiality, obtaining informed consent, and responsibly using genetic and personal data are essential aspects of ethical research practices.IX. Results
In terms of results, studies have revealed various key findings related to schizophrenia: Neurochemical Imbalances: The neurobiology of schizophrenia is influenced by neurotransmitter abnormalities. The dopamine dysregulation theory suggests that the activation of dopamine D2 receptors, particularly in the mesolimbic pathway, contributes to positive symptoms such as hallucinations and delusions. PET scans have demonstrated enhanced dopamine receptor binding in specific brain areas, supporting this hypothesis. Conversely, dopamine receptor hypofunction in the prefrontal cortex has been associated with cognitive deficiencies. Additionally, the glutamate hypothesis proposes that decreased NMDA receptor activity leads to glutamate hypofunction, affecting neuroplasticity and contributing to cognitive and affective symptomsX. Discussion
In the discussion of these findings, several important points emerge: Integration of Neurochemical and Structural Findings: The combination of neurochemical imbalances and structural brain abnormalities underscores the aetiology of schizophrenia. Dopamine dysregulation affects both positive and negative symptoms, while glutamate hypofunction impacts neuroplasticity and cognitive deficits. Structural brain anomalies disrupt neuronal circuitry, contributing to the presentation of symptoms.XI. Conclusion
In conclusion, this study sheds light on the intricate neural processes underlying schizophrenia, focusing on neurochemical imbalances, structural brain abnormalities, and the intricate interplay of hereditary and environmental factors. The comprehensive understanding of schizophrenia's neurology paves the way for groundbreaking treatment approaches. Successful translation of research findings into effective therapies necessitates collaboration among researchers, medical professionals, and pharmaceutical companies. Ongoing research into immune regulation, early intervention, and personalised therapy is of paramount importance to enhance patient outcomes. Moreover, the significance of comprehending the neuroscience of schizophrenia extends beyond the disorder itself, encompassing broader implications for mental health. Ethical considerations call for the responsible utilisation of emerging technologies and equitable access to therapy. The pursuit of improved therapies is driven by the aspiration to enhance the quality of life for individuals with schizophrenia and their families.XI. References
Abstract This research paper summarizes the findings of previous research about 3D bioprinting in tissue engineering. Biofabrication, particularly in the field of regenerative medicine and 3D in vitro models, shows great potential in creating intricate tissue structures that closely resemble native tissues. Preprocessing steps involve imaging the tissue using various modalities, designing the 3D model using CAD software, and considering the characteristics of the tissue for proper cell line selection. The development of suitable bioinks combining printability, cytocompatibility, and biofunctionality remains a challenge. Imaging techniques play a crucial role in characterizing tissue engineering products. Conventional tissue engineering strategies involve scaffolds, isolated cells, or a fusion of the cells with scaffolds, while 3D bioprinting enables the creation of complex tissue-like structures. These advancements have the potential to revolutionize a broad and developing sphere of tissue engineering, regenerative medicine, and biomedical research. The methods used, questionnaires and interviews, help in collecting information from experienced specialists and obtaining their true opinions on the topic. As a result, bioprinting has full potential to develop in the future.
I. Introduction
3D bioprinting is a modern branch of the broad field of tissue engineering. It is an additive manufacturing technique that employs bioinks to build structures layer by layer, mimicking the properties of natural tissues. These bioinks, used as the printing material, can be made from natural or synthetic materials mixed with living cells. Optimal bioink composition and density play a critical role in cell viability and density, so careful selection of the most appropriate bioink is essential for achieving specific research objectives. The suitability of bioprinters for specific bioinks can vary considerably; it is therefore crucial to ensure that the bioprinter and the chosen bioink are compatible and work well together. 3D bioprinters are engineered to handle delicate materials containing living cells while minimizing damage to the final product, and they come in various types, including inkjet-based, laser-assisted, and extrusion-based systems. Because of various diseases, many people need organ transplants; however, not everyone can afford one, and donor organs are often unavailable. This raises the question of whether 3D bioprinting has a future in producing artificial body parts for patients and in improving medicine. For example, according to Z. Xia, bioprinting creates tissue constructs using heterogeneous compositions with different structures.
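To make the layer-by-layer principle above more concrete, the short sketch below models an extrusion-style print plan for a simple scaffold: a chosen bioink is deposited in successive layers until the target height is reached. The `Bioink` class, its parameters, and all numeric values are hypothetical illustrations, not details taken from any specific bioprinter or protocol.

```python
# Minimal sketch (illustrative only): the layer-by-layer idea behind
# extrusion-based bioprinting. All names and parameter values are hypothetical.
from dataclasses import dataclass


@dataclass
class Bioink:
    name: str
    cell_density_per_ml: float   # living cells mixed into the ink
    viscosity_pa_s: float        # affects printability and shear stress on cells


def slice_scaffold(height_mm: float, layer_height_mm: float) -> list[float]:
    """Return the z-coordinate of each deposited layer, bottom to top."""
    n_layers = round(height_mm / layer_height_mm)
    return [round((i + 1) * layer_height_mm, 3) for i in range(n_layers)]


def print_plan(bioink: Bioink, height_mm: float, layer_height_mm: float) -> None:
    """Describe the deposition sequence a layer-by-layer printer would follow."""
    for z in slice_scaffold(height_mm, layer_height_mm):
        print(f"Deposit {bioink.name} layer at z = {z} mm")


if __name__ == "__main__":
    # Hypothetical hydrogel bioink and a 2 mm scaffold printed in 0.2 mm layers.
    ink = Bioink("alginate-gelatin blend", cell_density_per_ml=1e6, viscosity_pa_s=5.0)
    print_plan(ink, height_mm=2.0, layer_height_mm=0.2)
```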
II. Literature review
Biofabrication shows significant promise in regenerative medicine and in the creation of sophisticated 3D in vitro models. It enables the production of intricate tissue structures that mimic native tissues more closely than existing biomedical alternatives. Before printing, the first preprocessing step is to image the tissue of interest and gain an understanding of its basic anatomical properties. This is usually achieved using conventional 2D imaging methods such as MRI, CT, or ultrasound [2]. Other imaging modalities used to visualize the tissue of interest include positron emission tomography (PET), single-photon emission computed tomography (SPECT), and mammography.
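As a rough illustration of the preprocessing sequence just described (imaging, model design, and cell line selection), the sketch below chains these stages as placeholder functions. The function names, data classes, and example values are hypothetical stand-ins, since the literature reviewed here does not prescribe a single concrete software pipeline.

```python
# Minimal sketch (illustrative only) of the preprocessing pipeline described above:
# image the tissue, build a printable model, and choose a suitable cell line.
# All names and values are hypothetical stand-ins, not a real software interface.
from dataclasses import dataclass


@dataclass
class Scan:
    modality: str         # e.g. "MRI", "CT", or "ultrasound"
    voxel_size_mm: float  # spatial resolution of the acquired image


@dataclass
class PrintableModel:
    source_scan: Scan
    cell_line: str        # chosen to match the characteristics of the target tissue


def acquire_image(modality: str, voxel_size_mm: float) -> Scan:
    """Stand-in for importing a 2D/3D scan of the tissue of interest."""
    return Scan(modality=modality, voxel_size_mm=voxel_size_mm)


def design_model(scan: Scan, cell_line: str) -> PrintableModel:
    """Stand-in for building the CAD model and selecting the cell line."""
    return PrintableModel(source_scan=scan, cell_line=cell_line)


if __name__ == "__main__":
    scan = acquire_image("MRI", voxel_size_mm=0.5)
    model = design_model(scan, cell_line="patient-derived chondrocytes")
    print(model)
```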
III. Methods
Different methods were chosen for this research in order to obtain the most recent and reliable data on 3D bioprinting in tissue engineering. Both qualitative and quantitative approaches were used so that the problem could be examined more accurately. The qualitative approach makes it possible to gather the opinions of renowned scientists in this field as well as of ordinary people; the views of biomedical engineers, tissue engineers, and medical workers are especially valuable because they are the most experienced in this subject. Qualitative data were collected by interviewing members of the sample. The interview was conducted with a healthcare worker; before it, a consent letter was provided and consent was obtained. The consent letter and the questions are presented in Appendix 1 and Appendix 2, respectively. While the population is the general public aged 18 to 80, the scope of the interviews is people in medical, tissue engineering, or related fields. The quantitative approach makes it possible to collect statistical data and analyze them mathematically, providing the exact figures that are crucial to the research. These data were collected using a questionnaire, since it takes less time to complete, is easy to understand, and makes it easier to compile statistics. A consent letter was also obtained before the questionnaire, so the survey can be considered ethical; the consent letter and the questions are provided in Appendix 3 and Appendix 4, respectively. The survey contains six questions. All of the methods listed above and their results are included in the Appendix of this paper.
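The sketch below shows one simple way the six-question survey responses could be tallied before being reported. The response options and the answers themselves are invented placeholders for illustration, since the actual questionnaire data appear only in the Appendix.

```python
# Minimal sketch (hypothetical data): tallying answers to a six-question survey
# completed by five respondents. Every value below is an invented placeholder.
from collections import Counter

# Each inner list holds one respondent's answers (5 respondents x 6 questions).
responses = [
    ["Yes", "Positive", "Ethical", "Yes", "Support", "Yes"],
    ["Yes", "Positive", "Ethical", "Yes", "Support", "Yes"],
    ["Yes", "Neutral",  "Ethical", "No",  "Support", "Yes"],
    ["Yes", "Positive", "Unsure",  "Yes", "Support", "Yes"],
    ["No",  "Positive", "Ethical", "Yes", "Neutral", "No"],
]

# Frequency of each answer per question, the kind of summary reported in Results.
for q_index in range(6):
    counts = Counter(respondent[q_index] for respondent in responses)
    print(f"Question {q_index + 1}: {dict(counts)}")
```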
IV. Results
Responses to the questionnaire of people in the tissue engineering sphere
Healthcare worker's responses to the interview
Qualitative data obtained from a structured interview with a medical sphere worker are presented below in Table 1. The interview contained five questions about tissue engineering and 3D bioprinting. The answers were recorded on paper and carefully analyzed using the table: first, the quotes containing the main ideas were written down; then the main codes were obtained by generalization; finally, those codes were translated into themes that answer the main question of the research. The interview indicated that the medical field saw only isolated research and experiments in the past, whereas the field is now more advanced and will continue to develop in the future.

| Themes | Codes | Quotes |
|---|---|---|
| Bioprinting is an actual sphere, which has a great future. | Actual; Development; Future | “actual on the modern level”, “has its future”, “it has a great future”, “not only medicine, but also biology and microbiology”, “it depend on the level of scientific developments and relevance to the practice”, “later development of tissue engineering”, “witness all the developments and achievements” |
| The field of bioprinting is more advanced now than in the past. | properties; single | “it had prerequisites in the times when I worked in medicine”, “single experiments and researches”, “single practices to implement it into sphere of medicine” |
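To illustrate the quote → code → theme generalization used for Table 1, the sketch below groups a few of the table's quotes under their codes and themes using plain Python data structures. The groupings mirror Table 1; the code itself is only an illustrative aid, not part of the original analysis.

```python
# Minimal sketch (illustrative): representing the quote -> code -> theme
# generalization from Table 1 as simple Python data structures.
from collections import defaultdict

# (quote, code) pairs drawn from Table 1; codes were assigned by generalization.
coded_quotes = [
    ("actual on the modern level", "Actual"),
    ("it has a great future", "Future"),
    ("later development of tissue engineering", "Development"),
    ("single experiments and researches", "single"),
    ("it had prerequisites in the times when I worked in medicine", "properties"),
]

# Codes are then grouped under broader themes (theme labels copied from Table 1).
code_to_theme = {
    "Actual": "Bioprinting is an actual sphere, which has a great future.",
    "Future": "Bioprinting is an actual sphere, which has a great future.",
    "Development": "Bioprinting is an actual sphere, which has a great future.",
    "single": "The field of bioprinting is more advanced now than in the past.",
    "properties": "The field of bioprinting is more advanced now than in the past.",
}

themes: defaultdict[str, list[tuple[str, str]]] = defaultdict(list)
for quote, code in coded_quotes:
    themes[code_to_theme[code]].append((code, quote))

for theme, items in themes.items():
    print(theme)
    for code, quote in items:
        print(f"  [{code}] {quote}")
```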
V. Discussion
The results and materials obtained are sufficient and relevant to the research, because the answer to the main research question can be derived from them. The results can be regarded as significant because the chosen topic is important in the modern world and the results directly answer the main question. However, several challenges and limitations were faced. First, the number of participants is low: since the scope of this study had to be narrow, it was hard to find, select, and contact people, so there were only 5 participants in the questionnaire and 1 participant in the interview. With this number of participants, it is difficult to generalize the data and conclusions to the city, country, or global level. The second limitation is time: conducting high-quality research on a global scale takes a long time, and within this timeline it was hard to find people and analyze data. Taking these limitations into consideration, recommendations for future research can be proposed. First, the number of survey and interview participants should be increased, including people from different countries, so that the findings can be generalized globally. Second, new methods can be implemented to make the results more reliable.
VI. Conclusion
This research focuses on the current state of 3D bioprinting in the field of tissue engineering. The field itself is significant because it addresses problems in biology, specifically in medicine, so it was important to conduct research on this topic. The research question was: "What is the future of bioprinting in tissue engineering?" After conducting both primary and secondary research, our team found a clear answer to this question: there is definitely a future for bioprinting artificial human body parts. This answer is based on a review of different sources about bioprinting, its functions, advances, and working principles. The primary findings also show that the majority of surveyed people in the healthcare or biology fields have a positive opinion of 3D bioprinting, consider it ethical, strongly support the field, believe that more artificial organs important to humans will be printed in the future, and wish to witness the advances of this broad field of science.
VII. Appendix
VIII. References