Researcher:
Akkaş, Ahmet

Job Title: Faculty Member

Search Results

Now showing 1 - 9 of 9
  • Publication
    Dual-mode floating-point multiplier architectures with parallel operations
    (Elsevier, 2006) Schulte, Michael J.; Akkaş, Ahmet; Faculty Member; Department of Computer Engineering; College of Engineering
    Although most modern processors have hardware support for double precision or double-extended precision floating-point multiplication, this support is inadequate for many scientific computations. This paper presents the architecture of a quadruple precision floating-point multiplier that also supports two parallel double precision multiplications. Since hardware support for quadruple precision arithmetic is expensive, a new technique is presented that requires much less hardware than a fully parallel quadruple precision multiplier. With this architecture, quadruple precision multiplication has a latency of three cycles and two parallel double precision multiplications have latencies of only two cycles. The multiplier is pipelined so that two double precision multiplications can begin every cycle or a quadruple precision multiplication can begin every other cycle. The technique used for the dual-mode quadruple precision multiplier is also applied to the design of a dual-mode double precision floating-point multiplier that performs a double precision multiplication or two single precision multiplications in parallel. Synthesis results show that the dual-mode double precision multiplier requires 43% less area than a conventional double precision multiplier. The correctness of all the multipliers presented in this paper is tested and verified through extensive simulation. (c) 2006 Elsevier B.V. All rights reserved.
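    The dual-mode reuse described above rests on the standard half-word decomposition of a wide product. A minimal sketch using Python integers (the split width k=57 is chosen only for illustration; quadruple-precision significands are 113 bits, and this models the arithmetic identity, not the paper's hardware):

    ```python
    def quad_mul_via_halves(a, b, k=57):
        """Form a wide product from four half-width multiplies: the
        decomposition a dual-mode multiplier exploits to reuse its
        double-precision datapath (illustrative sketch only)."""
        mask = (1 << k) - 1
        a_hi, a_lo = a >> k, a & mask
        b_hi, b_lo = b >> k, b & mask
        # (a_hi*2^k + a_lo) * (b_hi*2^k + b_lo) expanded into four terms
        return ((a_hi * b_hi) << (2 * k)) \
             + ((a_hi * b_lo + a_lo * b_hi) << k) \
             + a_lo * b_lo
    ```

    In hardware, the four half-width products are what allow either one wide multiplication or two independent narrow ones through the same multiplier array.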
  • Publication
    A quadruple precision and dual double precision floating-point multiplier
    (IEEE Computer Soc, 2003) Schulte, Michael Joseph; Akkaş, Ahmet; Faculty Member; Department of Computer Engineering; College of Engineering
    Double precision floating-point arithmetic is inadequate for many scientific computations. This paper presents the design of a quadruple precision floating-point multiplier that also supports two parallel double precision multiplications. Since hardware support for quadruple precision arithmetic is expensive, a new technique is presented that requires much less hardware than a fully parallel quadruple precision multiplier. With this implementation, quadruple precision multiplication has a latency of three cycles and two parallel double precision multiplications have a latency of only two cycles. The design is pipelined so that two double precision multiplications can be started every cycle or a quadruple precision multiplication can be started every other cycle.
  • Publication
    Intrinsic compiler support for interval arithmetic
    (Springer, 2004) Schulte, M.J.; Stine, J.E.; Akkaş, Ahmet; Faculty Member; College of Engineering
    Interval arithmetic provides an efficient method for monitoring errors in numerical computations and for solving problems that cannot be efficiently solved with floating-point arithmetic. To support interval arithmetic, several software tools have been developed, including interval arithmetic libraries, extended scientific programming languages, and interval-enhanced compilers. The main disadvantage of these software tools is their speed, since interval operations are implemented using function calls. In this paper, compiler support for interval arithmetic is investigated. In particular, the performance benefit of having the compiler inline interval operations to eliminate function call overhead is examined. Interval operations are inlined with the GNU gcc compiler and the performance of interval arithmetic is evaluated on a superscalar architecture. To implement interval operations with compiler support, the compiler produces sequences of instructions that use existing floating-point hardware. Simulation results show that the compiler implementation of interval arithmetic is approximately 4 to 5 times faster than a functionally equivalent interval arithmetic software implementation with function call overhead and approximately 1.2 to 1.5 times slower than a dedicated interval arithmetic hardware implementation.
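    The interval operations such tools inline are short enough to show in full; a minimal sketch of an interval type follows (illustrative only: a reliable implementation would round lower bounds toward negative infinity and upper bounds toward positive infinity, which plain Python floats do not do):

    ```python
    class Interval:
        """Closed interval [lo, hi]; endpoint rounding direction ignored."""
        def __init__(self, lo, hi):
            assert lo <= hi
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            # sum interval: add endpoints pairwise
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            # product interval: min and max over the four endpoint products
            ps = [self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi]
            return Interval(min(ps), max(ps))

        def __contains__(self, x):
            return self.lo <= x <= self.hi
    ```

    The four multiplications and two min/max selections in `__mul__` are exactly the kind of short straight-line sequence that inlining avoids wrapping in a function call.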
  • Publication
    A combined interval and floating-point reciprocal unit
    (IEEE, 2005) Küçükkabak, Umut; Akkaş, Ahmet; Master Student; Faculty Member; Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    Interval arithmetic is one technique for accurate and reliable computing. Among interval arithmetic operations, division is the most time consuming operation. This paper presents the design and implementation of a combined interval and floating-point reciprocal unit. To compute the reciprocal of an operand, an initial approximation is computed first and then iterated twice by Newton-Raphson iteration. The combined interval and floating-point reciprocal unit computes the reciprocal of a double precision floating-point number in eleven clock cycles and the reciprocal of an interval in twenty-one clock cycles. The unit is implemented in VHDL and synthesized to estimate the area and the worst case delay. Simulation results showed that the least significant bit of the floating-point result cannot be guaranteed to match the infinitely precise result in all cases. For the interval reciprocal, however, the true result is contained in the result interval.
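    The interval reciprocal itself has a simple closed form when the interval excludes zero: 1/[lo, hi] = [1/hi, 1/lo]. A sketch (endpoint rounding direction is ignored here; a reliable unit rounds the lower endpoint down and the upper endpoint up):

    ```python
    def interval_reciprocal(lo, hi):
        """Reciprocal of the interval [lo, hi], which must not contain
        zero. Endpoints swap because 1/x is decreasing on each side of 0."""
        assert lo > 0.0 or hi < 0.0, "interval must not contain zero"
        return (1.0 / hi, 1.0 / lo)
    ```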
  • Publication
    A dual-mode quadruple precision floating-point divider
    (IEEE, 2006) İşseven, Aytunç; Akkaş, Ahmet; Master Student; Faculty Member; Graduate School of Sciences and Engineering; College of Engineering
    Many scientific applications require more accurate computations than double precision or double-extended precision floating-point arithmetic. This paper presents the design of a dual-mode quadruple precision floating-point divider that also supports two parallel double precision divisions. A radix-4 SRT division algorithm with minimal redundancy is used to implement the dual-mode quadruple precision floating-point divider. To estimate area and worst case delay, a double, a quadruple, a dual-mode double, and a dual-mode quadruple precision floating-point division unit are implemented in VHDL and synthesized. The synthesis results show that the dual-mode quadruple precision divider requires 22% more area than the quadruple precision divider and its worst case delay is 1% longer. A quadruple precision division takes fifty-nine cycles and two parallel double precision divisions take twenty-nine cycles.
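    The radix-4 digit-recurrence idea behind SRT division can be modeled in software. This is an illustrative sketch, not the paper's design: it selects each quotient digit from the minimally redundant set {-2, ..., 2} by a full-precision comparison, where a hardware unit would use a small lookup table on truncated leading bits of the partial remainder and divisor.

    ```python
    def srt4_divide(x, d, digits=16):
        """Radix-4 SRT division sketch, digit set {-2,...,2} (redundancy
        factor 2/3). Assumes a normalized divisor d in [1, 2)."""
        assert 1.0 <= d < 2.0 and 0.0 <= x < 2.0
        w = x / 4.0                      # scale so |w| <= (2/3)*d initially
        q, scale = 0.0, 1.0
        for _ in range(digits):
            wd = 4.0 * w
            # digit selection: nearest digit in {-2..2}; the clamp keeps the
            # next partial remainder within the redundancy bound (2/3)*d
            qi = max(-2, min(2, round(wd / d)))
            w = wd - qi * d              # recurrence: w[j+1] = 4*w[j] - q*d
            scale /= 4.0
            q += qi * scale              # accumulate quotient digit
        return 4.0 * q                   # undo the initial scaling of w
    ```

    Redundancy in the digit set is what makes the table-based selection tolerable in hardware: an off-by-one digit choice is absorbed by later digits of the opposite sign.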
  • Publication
    Design and implementation of reciprocal unit using table look-up and newton-raphson iteration
    (IEEE Computer Soc, 2004) Küçükkabak, Umut; Akkaş, Ahmet; Master Student; Faculty Member; Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    The combination of an initial approximation obtained through a table look-up and Newton-Raphson iteration is an effective way to compute the reciprocal, which may replace the division operation. This paper presents the design and implementation of a reciprocal unit that computes the reciprocal of a double precision floating-point number in eleven clock cycles. The presented design utilizes a 2^10 x 20-bit ROM followed by two Newton-Raphson iterations. The design is implemented in VHDL and synthesized to estimate the area and the worst case delay. Simulation results show that the least significant bit of the result cannot be guaranteed to be correct for all cases.
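    The table-seed-plus-two-iterations scheme can be modeled in a few lines. This sketch approximates the ROM with interval-midpoint reciprocals and is illustrative only; the seed indexing and table contents here are assumptions, not the paper's exact ROM:

    ```python
    def reciprocal(d, table_bits=10):
        """Newton-Raphson reciprocal sketch: table seed + two iterations.
        Assumes d normalized to [1, 2), as for an IEEE 754 significand."""
        assert 1.0 <= d < 2.0
        # seed: index a 2^10-entry table by the top fraction bits of d and
        # store the reciprocal of the midpoint of each indexed subinterval
        idx = int((d - 1.0) * (1 << table_bits))
        mid = 1.0 + (idx + 0.5) / (1 << table_bits)
        x = 1.0 / mid
        for _ in range(2):
            x = x * (2.0 - d * x)    # x[k+1] = x[k] * (2 - d*x[k])
        return x
    ```

    Each iteration roughly squares the relative error, so a seed accurate to about 2^-11 reaches near double-precision accuracy after two iterations, which is why two suffice here.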
  • Publication
    Dual-mode quadruple precision floating-point adder
    (IEEE Computer Soc, 2006) Akkaş, Ahmet; Faculty Member; Department of Computer Engineering; College of Engineering
    Many scientific applications require more accurate computations than double precision or double-extended precision floating-point arithmetic. This paper presents a dual-mode quadruple precision floating-point adder that also supports two parallel double precision additions. The technique and modifications used to design the dual-mode quadruple precision adder are also applied to implement a dual-mode double precision adder which supports one double precision or two parallel single precision operations. To estimate area and worst case delay, the conventional and the dual-mode double and quadruple precision adders are implemented in VHDL and synthesized. The correctness of all the designs is also tested and verified through extensive simulation. Synthesis results show that the dual-mode quadruple precision adder requires roughly 14% more area than the conventional quadruple precision adder and its worst case delay is 9% longer.
  • Publication
    A combined interval and floating-point comparator/selector
    (IEEE Computer Soc, 2002) Akkaş, Ahmet; Faculty Member; Department of Computer Engineering; College of Engineering
    Interval arithmetic provides a robust method for automatically monitoring numerical errors and can be used to solve problems that cannot be efficiently solved with floating-point arithmetic. This paper presents the design and implementation of a combined interval and floating-point comparator/selector, which performs interval intersection, hull, mignitude, magnitude, minimum, maximum, and comparisons, as well as floating-point minimum, maximum, and comparisons. Area and delay estimates indicate that the combined interval and floating-point comparator/selector has 98% more area and a worst case delay that is 42% greater than a conventional floating-point comparator/selector. The combined interval and floating-point comparator/selector greatly improves the performance of interval selection operations.
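    The interval selection operations named above have simple definitions, sketched here over (lo, hi) tuples (illustrative; outward rounding of endpoints is ignored):

    ```python
    def hull(a, b):
        """Smallest interval containing both a and b."""
        return (min(a[0], b[0]), max(a[1], b[1]))

    def intersect(a, b):
        """Intersection of a and b, or None if they are disjoint."""
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo <= hi else None

    def mig(a):
        """Mignitude: smallest absolute value taken in the interval."""
        return 0.0 if a[0] <= 0.0 <= a[1] else min(abs(a[0]), abs(a[1]))

    def mag(a):
        """Magnitude: largest absolute value taken in the interval."""
        return max(abs(a[0]), abs(a[1]))
    ```

    Each operation reduces to endpoint comparisons and selections, which is why a single comparator/selector datapath can serve all of them.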
  • Publication
    Reduced delay BCD adder
    (IEEE, 2007) Bayrakçı, Alp Arslan; Akkaş, Ahmet; PhD Student; Faculty Member; Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    Financial and commercial applications use decimal data and spend most of their time in decimal arithmetic. Software implementation of decimal arithmetic is typically at least 100 times slower than binary arithmetic implemented in hardware; therefore, hardware support for decimal arithmetic is required. In this paper, a reduced-delay binary coded decimal (BCD) adder is proposed. The proposed adder improves the delay of BCD addition by increasing parallelism. On the critical path of the proposed BCD adder, there are two 4-bit binary adders, a carry network, one AND gate, and one OR gate. For area and delay comparison, the proposed adder and five previously proposed decimal adders are implemented in VHDL and synthesized using a 0.18 micron TSMC ASIC library. Synthesis results obtained for 64-bit addition (16 decimal digits) show that the proposed BCD adder has the shortest delay (1.40 ns). Furthermore, it requires less area than three of the previously proposed decimal adders.
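    Classic BCD addition corrects each 4-bit digit sum by +6 when it exceeds 9, since codes A-F are unused. A digit-serial sketch of that baseline (the paper's contribution is computing these corrections with more parallelism, which this ripple model deliberately does not capture):

    ```python
    def bcd_add(a, b):
        """Add two 16-digit BCD numbers packed 4 bits per digit.
        Carry out of the 16th digit is dropped in this sketch."""
        result, carry = 0, 0
        for shift in range(0, 64, 4):        # 16 decimal digits in 64 bits
            da = (a >> shift) & 0xF
            db = (b >> shift) & 0xF
            s = da + db + carry
            carry = 1 if s > 9 else 0
            if s > 9:
                s = (s + 6) & 0xF            # +6 skips the unused codes A-F
            result |= s << shift
        return result
    ```

    The proposed adder's point is that the per-digit "sum > 9" decisions need not ripple: they can feed a carry network like the one used in fast binary adders.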