
Your task is to improve a multi-threaded program that frequently writes small entries to a shared file by introducing a layer that merges entries in memory before writing them to the file, reducing the number of expensive I/O operations. Imagine a program that writes small log entries to a file very frequently. The program is multi-threaded, and each thread writes log entries to a common log file. To simplify the implementation, each thread writes its log entries individually through the blocking I/O interface. Even though the threads write to the shared file concurrently without any locking, this is correct because the OS guarantees the atomicity of each individual write() operation.
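
One way to approach this is to collect entries from all threads into a shared in-memory buffer and issue a single write() only when the buffer fills up. Below is a minimal sketch, assuming POSIX threads and a fixed flush threshold; the names log_init, buffered_log, and log_flush_locked, and the 4 KiB threshold, are illustrative choices, not part of the assignment.

/* Sketch of an in-memory merge layer for a shared log file.
 * Entries from all threads are appended to one buffer and flushed
 * to the file with a single blocking write() when the buffer fills. */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

#define BUF_CAP 4096                     /* flush threshold (illustrative) */

static char   buf[BUF_CAP];
static size_t buf_len = 0;
static int    log_fd  = -1;
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Open the shared log file once, before any thread logs. */
void log_init(const char *path)
{
    log_fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
}

/* Write the merged buffer with one blocking write(); caller holds buf_lock. */
static void log_flush_locked(void)
{
    size_t off = 0;
    while (off < buf_len) {
        ssize_t n = write(log_fd, buf + off, buf_len - off);
        if (n < 0)
            break;                       /* real code would handle EINTR and errors */
        off += (size_t)n;
    }
    buf_len = 0;
}

/* Called by every thread in place of write(); assumes len <= BUF_CAP. */
void buffered_log(const char *entry, size_t len)
{
    pthread_mutex_lock(&buf_lock);
    if (buf_len + len > BUF_CAP)         /* not enough room: flush merged data first */
        log_flush_locked();
    memcpy(buf + buf_len, entry, len);   /* merge the entry into the shared buffer */
    buf_len += len;
    pthread_mutex_unlock(&buf_lock);
}

Each thread calls buffered_log() instead of write(); the mutex serializes access to the buffer, so many small entries are merged and written with far fewer system calls. A complete solution would also flush on shutdown, handle oversized entries and write errors, and perhaps flush on a timer so entries are not delayed indefinitely.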

