• DocumentCode
    1150000
  • Title
    Parallel architectures for processing high speed network signaling protocols
  • Author
    Ghosal, Dipak ; Lakshman, T.V. ; Huang, Yennun
  • Author_Institution
    Bellcore, Red Bank, NJ, USA
  • Volume
    3
  • Issue
    6
  • fYear
    1995
  • fDate
    12/1/1995 12:00:00 AM
  • Firstpage
    716
  • Lastpage
    728
  • Abstract
    We study the effectiveness of different parallel architectures for achieving the high throughputs and low latencies needed in processing signaling protocols for high speed networks. A key performance issue is the trade-off between the load balancing gains and the call record management overhead. Arranging processors in large groups potentially yields higher load balancing gains but also incurs higher overhead in maintaining consistency among the replicated copies of the call records. We study this trade-off and its impact on the design of protocol processing systems for two generic classes of parallel architectures, namely, shared memory and distributed memory architectures. In shared memory architectures, maintaining a common message queue in the shared memory can provide the maximal load balancing gains. We show, however, that in order to optimize performance it is necessary to organize the processors in small groups, since large groups result in higher call record management overhead. In distributed memory architectures, with each processor maintaining its own message queue, there is no inherent provision for load balancing. Based on a detailed simulation analysis, we show that organizing the processors into small groups and using a simple distributed load balancing scheme yields modest performance gains even after call record management overheads are taken into account. We find that the common message queue architecture outperforms the distributed architecture in terms of lower response time due to its improved load balancing capability. Finally, we perform a fault-tolerance analysis with respect to the call-record data structure. Using a simple failure recovery model of the processors and the local memory, we show that in the case of the shared memory architecture, availability is also optimized when processors are organized in small groups. This is because, when comparing architectures, the higher call record management overhead incurred for larger group sizes must be accounted for as system unavailability.
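
    The group-size trade-off described in the abstract can be illustrated with a small, self-contained simulation. The Python sketch below is hypothetical and is not the paper's simulation model: it serves a group of g processors from a common FCFS message queue and inflates every message's service time by an assumed consistency cost delta*(g-1) standing in for the call record management overhead of keeping g replicated call records in sync; the arrival rate lam, service rate mu, and delta are purely illustrative parameters.

    import random

    def simulate_group(g, lam, mu, delta, n_msgs=50000, seed=1):
        # One processor group of size g fed by a common FCFS message queue.
        # Toy model (illustrative assumptions, not the paper's): Poisson
        # arrivals at rate lam per processor, exponential service at rate mu,
        # plus a call-record consistency overhead of delta*(g - 1) added to
        # every message to model replicated call record maintenance.
        rng = random.Random(seed)
        t = 0.0
        free_at = [0.0] * g          # time each processor next becomes idle
        total_response = 0.0
        for _ in range(n_msgs):
            t += rng.expovariate(lam * g)               # next arrival to the group
            k = min(range(g), key=free_at.__getitem__)  # earliest-free processor serves it
            start = max(t, free_at[k])
            free_at[k] = start + rng.expovariate(mu) + delta * (g - 1)
            total_response += free_at[k] - t
        return total_response / n_msgs

    if __name__ == "__main__":
        # Load balancing gains dominate for small groups, while the
        # consistency overhead dominates for large ones, so the mean
        # response time is minimized at a moderate group size.
        for g in (1, 2, 4, 8, 16):
            print(f"group size {g:2d}: mean response time "
                  f"{simulate_group(g, lam=0.6, mu=1.0, delta=0.02):.3f}")
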
  • Keywords
    distributed memory systems; memory architecture; parallel architectures; shared memory systems; signal processing; telecommunication computing; telecommunication network reliability; telecommunication networks; transport protocols; availability; call record management overhead; call-record data structure; distributed load balancing; distributed memory architecture; failure recovery model; fault-tolerance analysis; high speed network signaling protocols; high throughputs; load balancing gains; local memory; low latencies; message queue; message queue architecture; parallel architectures; performance optimisation; protocol processing systems; response time; shared memory architecture; signaling protocols processing; simulation analysis; system unavailability; Delay; High-speed networks; Load management; Memory architecture; Parallel architectures; Performance gain; Process design; Protocols; Signal processing; Throughput;
  • fLanguage
    English
  • Journal_Title
    Networking, IEEE/ACM Transactions on
  • Publisher
    ieee
  • ISSN
    1063-6692
  • Type
    jour
  • DOI
    10.1109/90.477718
  • Filename
    477718