Microprocessor Application Lab, Week 4, Lab04 Results Report (A+ material)
This content is partially excerpted from the original material
"Sogang University 2023 Microprocessor Application Lab, Week 4, Lab04 Results Report (A+ material)".
2024.03.26
Topics in this document
  • 1. Number system / ASCII code
    Octal and hexadecimal are number systems that group binary digits in units of 3 and 4 bits, respectively; all of them can be converted to and from decimal. Computers represent signed numbers, including negatives, using 2's complement. ASCII is a code for representing characters: codes 0x00~0x1F plus 0x7F form 33 control characters, and the remaining 95 are printable characters. (See sketch 1 after this list.)
  • 2. Flags / Updating flags
    When a data-processing operation executes, the status of its result is stored in flags. The N, Z, C, V, and Q flags live in the APSR (Application PSR) part of the PSR (Program Status Register), and they can be used by subsequent operations. An instruction with the S suffix updates the flags; without it, the flags are left unchanged. (See sketch 2 after this list.)
  • 3. Logical instructions
    Logical instructions perform operations such as AND, OR, and exclusive OR. ANDing with 0 resets a bit to 0, while ANDing with 1 preserves it; ORing with 0 preserves a bit, while ORing with 1 sets it to 1; EORing with 0 preserves a bit, while EORing with 1 toggles it. TST and TEQ only update the flags. (See sketch 3 after this list.)
  • 4. Shift / rotation instructions
    ASR is a shift right that preserves the sign, LSL and LSR are shifts that ignore the sign, and ROR performs a rotate right. Shift/rotate operations can be combined with the MOV instruction to process an operand. (See sketch 4 after this list.)
  • 5. Arithmetic instructions
    Instructions such as ADD, SUB, ADC, SBC, and RSB perform addition and subtraction. CMP and CMN perform a subtraction and an addition, respectively, but only update the flags without storing the result. CLZ stores the number of leading zero bits of its operand. (See sketch 5 after this list.)
  • 6. Multiply / Division instructions
    MUL and MLA perform 32-bit multiplication, while UMULL and SMULL perform 64-bit multiplication. SDIV and UDIV perform signed and unsigned division, respectively. (See sketch 6 after this list.)
  • 7. Bitfield / Sign extension instructions
    BFC clears a bitfield to 0, and BFI copies a bitfield into place. SBFX extracts a bitfield with sign extension, UBFX with zero extension. SXTB, SXTH, UXTB, and UXTH sign-/zero-extend a byte or halfword. (See sketch 7 after this list.)
  • 8. Fixed-point arithmetic
    When a processor does not support floating-point operations, real-valued arithmetic can still be performed with fixed-point operations. The Q format determines the precision, and a multiplication can overflow when its result exceeds 32 bits. (See sketch 8 after this list.)
  • 9. Floating-point arithmetic
    According to the IEEE-754 standard, a 32-bit floating-point number consists of a sign, an exponent, and a mantissa, and the processor can operate on such numbers directly. Floating-point multiplication can also be implemented by extracting the sign, exponent, and mantissa, operating on them, and then recombining the pieces. (See sketch 9 after this list.)
  • 10. Divide by zero
    When the divisor of SDIV or UDIV is 0, the result is 0x00000000 and, by default, no exception occurs: the processor mechanically performs the division and returns 0. (See sketch 10 after this list.)
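Sketch 1 (Number system / ASCII code). A minimal C illustration of the encodings described above; the lab itself is done in ARM assembly, so this program is only a demonstration of 2's-complement storage and ASCII codes.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t neg = -5;                                   /* stored as 2's complement */
    printf("-5 as 8 bits: 0x%02X\n", (uint8_t)neg);    /* 0xFB */
    printf("0x2F = octal %o = decimal %d\n", 0x2F, 0x2F); /* 57, 47 */
    printf("'A' = 0x%02X (printable)\n", 'A');         /* 0x41 */
    printf("'\\n' = 0x%02X (control)\n", '\n');        /* 0x0A */
    return 0;
}
```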
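Sketch 2 (Flags / Updating flags). The APSR is not visible from C, so this sketch recomputes by hand the N, Z, C, and V values that an ADDS would produce; the flag formulas are standard, but the program is illustrative only.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t a = 0x80000000u, b = 0x80000000u;
    uint32_t r = a + b;                           /* what ADDS computes */
    int n = (r >> 31) & 1;                        /* N: bit 31 of the result */
    int z = (r == 0);                             /* Z: result is zero */
    int c = (r < a);                              /* C: unsigned carry out */
    int v = ((~(a ^ b) & (a ^ r)) >> 31) & 1;     /* V: signed overflow */
    printf("N=%d Z=%d C=%d V=%d\n", n, z, c, v);  /* N=0 Z=1 C=1 V=1 */
    return 0;
}
```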
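Sketch 3 (Logical instructions). A C sketch of the reset/keep/set/toggle behavior of AND, ORR, and EOR masks; the final test mirrors what TST does, computing a bitwise AND only for its effect on a condition.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t x = 0xF0F0ABCDu;
    printf("AND reset:  0x%08X\n", x & 0xFFFF0000u);  /* low half cleared */
    printf("ORR set:    0x%08X\n", x | 0x0000FFFFu);  /* low half set */
    printf("EOR toggle: 0x%08X\n", x ^ 0xFFFFFFFFu);  /* every bit flipped */
    if (x & (1u << 3))            /* TST-style: AND used only for the condition */
        printf("bit 3 of x is set\n");
    return 0;
}
```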
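Sketch 4 (Shift / rotation instructions). C has no rotate operator, so a hypothetical helper ror32 is used; note that >> on a negative signed value is implementation-defined in C, though common ARM toolchains implement it as an arithmetic shift, matching ASR.

```c
#include <stdio.h>
#include <stdint.h>

static uint32_t ror32(uint32_t x, unsigned n) {   /* ROR, built by hand */
    n &= 31;
    return (x >> n) | (x << ((32u - n) & 31u));
}

int main(void) {
    int32_t s = -16;
    printf("ASR #2: %d\n", s >> 2);                    /* -4, sign preserved */
    printf("LSR #4: 0x%08X\n", 0xF0000000u >> 4);      /* 0x0F000000, zero fill */
    printf("LSL #4: 0x%08X\n", 0x0000000Fu << 4);      /* 0x000000F0 */
    printf("ROR #4: 0x%08X\n", ror32(0x0000000Fu, 4)); /* 0xF0000000 */
    return 0;
}
```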
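Sketch 5 (Arithmetic instructions). A 64-bit addition built from 32-bit halves, mirroring how an ADDS/ADC pair carries the C flag between the two adds; __builtin_clz is a GCC/Clang builtin standing in for CLZ (it is undefined for a zero argument, which is avoided here).

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t alo = 0xFFFFFFFFu, ahi = 0x00000001u;
    uint32_t blo = 0x00000001u, bhi = 0x00000002u;
    uint32_t rlo = alo + blo;              /* ADDS: low words, sets carry */
    uint32_t carry = (rlo < alo);          /* the C flag, recomputed by hand */
    uint32_t rhi = ahi + bhi + carry;      /* ADC: high words plus carry in */
    printf("sum = 0x%08X%08X\n", rhi, rlo);            /* 0x0000000400000000 */
    printf("clz(0x00010000) = %d\n", __builtin_clz(0x00010000u)); /* 15 */
    return 0;
}
```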
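Sketch 6 (Multiply / Division instructions). The widened multiply corresponds to SMULL, the truncated one to MUL keeping only the low 32 bits, and C's / operator truncates toward zero just as SDIV does.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t a = 100000, b = 100000;
    int64_t wide = (int64_t)a * b;              /* SMULL-style 64-bit product */
    uint32_t low = (uint32_t)a * (uint32_t)b;   /* MUL-style: low 32 bits only */
    printf("64-bit product: %lld\n", (long long)wide);  /* 10000000000 */
    printf("low 32 bits:    %u\n", low);                /* 1410065408 */
    printf("SDIV-style: %d\n", -7 / 2);         /* -3: rounds toward zero */
    return 0;
}
```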
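Sketch 7 (Bitfield / Sign extension instructions). Hypothetical helpers ubfx and sbfx reproduce the extract-and-extend behavior of UBFX/SBFX for field widths of 1..31; the casts at the end mirror UXTB and SXTB.

```c
#include <stdio.h>
#include <stdint.h>

/* UBFX-style: extract <width> bits at <lsb>, zero-extended (width in 1..31). */
static uint32_t ubfx(uint32_t x, unsigned lsb, unsigned width) {
    return (x >> lsb) & ((1u << width) - 1u);
}

/* SBFX-style: same field, but sign-extended from its top bit. */
static int32_t sbfx(uint32_t x, unsigned lsb, unsigned width) {
    uint32_t f = ubfx(x, lsb, width);
    uint32_t sign = 1u << (width - 1);
    return (int32_t)((f ^ sign) - sign);      /* standard sign-extend trick */
}

int main(void) {
    uint32_t x = 0x00000F80u;                 /* field 0b11111 at bit 7 */
    printf("UBFX: %u\n", ubfx(x, 7, 5));      /* 31 */
    printf("SBFX: %d\n", sbfx(x, 7, 5));      /* -1: field's top bit is sign */
    printf("SXTB: %d\n", (int8_t)0xFF);       /* -1: sign bit extended */
    printf("UXTB: %d\n", (uint8_t)0xFF);      /* 255: zero extended */
    return 0;
}
```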
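Sketch 8 (Fixed-point arithmetic). A fixed-point multiply assuming a Q16.16 format for illustration: the 64-bit intermediate avoids the 32-bit overflow mentioned above before renormalizing by 16 bits.

```c
#include <stdio.h>
#include <stdint.h>

typedef int32_t q16_16;    /* 16 integer bits, 16 fraction bits */

/* Multiply in Q16.16: widen to 64 bits, then shift back into format. */
static q16_16 qmul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void) {
    q16_16 a = (q16_16)(2.5  * 65536);        /* 2.5  in Q16.16 */
    q16_16 b = (q16_16)(1.25 * 65536);        /* 1.25 in Q16.16 */
    printf("2.5 * 1.25 = %f\n", qmul(a, b) / 65536.0);  /* 3.125 */
    return 0;
}
```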
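Sketch 9 (Floating-point arithmetic). Extracting the IEEE-754 sign, biased exponent, and fraction fields of a float via memcpy; recombining modified fields the same way is the basis of the extract-operate-recombine multiply described above.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret the IEEE-754 bits */
    uint32_t sign = bits >> 31;
    uint32_t exp  = (bits >> 23) & 0xFFu;     /* biased exponent */
    uint32_t frac = bits & 0x7FFFFFu;         /* 23-bit fraction */
    printf("bits: 0x%08X\n", bits);           /* 0xC0C80000 */
    printf("sign=%u exp=%u (unbiased %d) frac=0x%06X\n",
           sign, exp, (int)exp - 127, frac);  /* sign=1 exp=129 (2) frac=0x480000 */
    return 0;
}
```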
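Sketch 10 (Divide by zero). Integer division by zero is undefined behavior in C, so a hypothetical wrapper sdiv_like reproduces the default (no-trap) Cortex-M hardware rule of returning 0.

```c
#include <stdio.h>
#include <stdint.h>

/* Mimic the default SDIV result: a zero divisor yields 0, no fault. */
static int32_t sdiv_like(int32_t n, int32_t d) {
    return d ? n / d : 0;
}

int main(void) {
    printf("10 / 2 = %d\n", sdiv_like(10, 2));  /* 5 */
    printf("10 / 0 = %d\n", sdiv_like(10, 0));  /* 0, no exception */
    return 0;
}
```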
Exploring the topics with Easy AI
  • 1. Number system / ASCII code
    The number system and ASCII code are fundamental concepts in computer science and digital systems. Understanding the binary number system, how it represents data, and the ASCII code that maps characters to numeric values is crucial for working with digital information and programming. These topics provide the foundation for more advanced concepts like data representation, memory management, and low-level programming. Mastering the number system and ASCII code enables developers to better understand how computers process and store information, which is essential for designing efficient and reliable software and hardware systems.
  • 2. Flags / Updating flags
    Flags and flag updating are important concepts in computer architecture and low-level programming. Flags are used to store the results of various operations, such as carry, overflow, and zero, which are essential for conditional branching and control flow. Understanding how to properly update and use flags is crucial for writing efficient and correct assembly language or machine-level code. Flags allow programmers to make decisions based on the outcomes of previous operations, which is a fundamental aspect of computer programming. Mastering flags and flag updating is a valuable skill for anyone working with low-level systems programming or embedded systems.
  • 3. Logical instructions
    Logical instructions, such as AND, OR, XOR, and NOT, are fundamental operations in computer architecture and digital logic. These instructions are used to perform bitwise operations on data, which are essential for tasks like data manipulation, bit masking, and bit-level optimization. Understanding how to effectively use logical instructions is crucial for writing efficient and optimized code, particularly in low-level programming or systems programming. Mastering logical instructions can also help developers gain a deeper understanding of how computers represent and process information at the bit and byte level, which can be valuable for a wide range of applications, from embedded systems to high-performance computing.
  • 4. Shift / rotation instructions
    Shift and rotation instructions are powerful tools in computer architecture and low-level programming. They allow for efficient bit manipulation, which is essential for tasks such as scaling, bit packing, and bit field extraction. Understanding how to effectively use shift and rotation instructions is crucial for writing optimized code, particularly in performance-critical applications or embedded systems. These instructions can be used to perform operations like logical shifts, arithmetic shifts, and circular rotations, which can significantly improve the efficiency and performance of certain algorithms. Mastering shift and rotation instructions can also provide valuable insights into how computers represent and process data at the bit level, which can be beneficial for a wide range of programming tasks.
  • 5. Arithmetic instructions
    Arithmetic instructions, such as addition, subtraction, multiplication, and division, are fundamental operations in computer architecture and programming. These instructions form the backbone of many algorithms and calculations, and a deep understanding of how they work is essential for writing efficient and correct code. Mastering arithmetic instructions can help developers optimize their code for performance, understand the limitations and quirks of different number representations (e.g., integer, floating-point), and develop a stronger grasp of how computers process and manipulate data. This knowledge can be particularly valuable in domains like scientific computing, financial applications, and embedded systems, where the efficient use of arithmetic operations is crucial for achieving high performance and accuracy.
  • 6. Multiply / Division instructions
    Multiply and division instructions are essential for a wide range of computational tasks, from simple arithmetic to complex mathematical operations. Understanding how these instructions work, their performance characteristics, and their limitations is crucial for writing efficient and optimized code. Mastering multiply and division instructions can help developers choose the most appropriate operations for their specific use cases, optimize algorithms for speed and accuracy, and work with different number representations (e.g., integers, floating-point) more effectively. This knowledge can be particularly valuable in domains like signal processing, image and video processing, scientific computing, and financial applications, where the efficient use of multiplication and division is critical for achieving high performance and accurate results.
  • 7. Bitfield / Sign extension instructions
    Bitfield and sign extension instructions are powerful tools for working with data representation and manipulation in computer architecture and programming. Bitfields allow for efficient storage and access of data within a single word or register, while sign extension instructions are used to preserve the sign information when converting between data types of different sizes. Understanding how to effectively use these instructions is crucial for tasks like bit packing, data compression, and working with fixed-point or mixed-precision arithmetic. Mastering bitfield and sign extension instructions can help developers write more efficient and compact code, particularly in domains like embedded systems, digital signal processing, and low-level system programming, where optimizing data representation and memory usage is critical.
  • 8. Fixed-point arithmetic
    Fixed-point arithmetic is an important concept in computer architecture and programming, particularly in domains where precise numerical calculations are required, such as digital signal processing, control systems, and embedded systems. Understanding the principles of fixed-point representation, the trade-offs between precision and range, and the techniques for performing fixed-point operations efficiently is crucial for developing high-performance and accurate computational algorithms. Mastering fixed-point arithmetic can help developers optimize their code for specific hardware constraints, avoid numerical errors and overflow issues, and achieve better performance and energy efficiency compared to using floating-point arithmetic in certain applications. This knowledge can be particularly valuable for engineers and programmers working on real-time systems, low-power devices, and other resource-constrained environments.
  • 9. Floating-point arithmetic
    Floating-point arithmetic is a fundamental concept in computer science and engineering, as it provides a way to represent and perform calculations on a wide range of numerical values with varying degrees of precision. Understanding the IEEE 754 standard for floating-point representation, the trade-offs between precision and range, and the potential pitfalls of floating-point operations (such as rounding errors and underflow/overflow issues) is crucial for developing accurate and reliable computational algorithms. Mastering floating-point arithmetic can help developers choose the appropriate data types and computational methods for their specific applications, optimize performance, and ensure numerical stability in a wide range of domains, including scientific computing, financial modeling, graphics, and machine learning. This knowledge is essential for anyone working with numerical computations in modern software and hardware systems.
  • 10. Divide by zero
    Divide by zero is a fundamental concept in computer arithmetic that can have significant implications for the correctness and stability of computational algorithms. Understanding how computer systems handle division by zero, the different ways it can be detected and handled, and the potential consequences of unhandled divide-by-zero errors is crucial for writing robust and reliable software. Mastering the handling of divide-by-zero situations can help developers anticipate and mitigate potential issues, implement appropriate error-handling mechanisms, and ensure that their applications behave predictably and gracefully in the face of unexpected or invalid inputs. This knowledge is particularly important in domains like scientific computing, finance, and safety-critical systems, where the correct handling of numerical operations and errors is essential for maintaining the integrity and reliability of the overall system.