What does "Idle Wake Ups" indicate in the Mavericks activity monitor?


Solution 1

Mavericks performs aggressive timer coalescing to reduce power consumption; Apple claims up to a 72% reduction in CPU activity. I think (but am still searching for written proof) that Idle Wake Ups is the number of times the process causes the CPU to leave the idle state per quantum of time. I'm not sure what that quantum is (probably one second).

You can read more about Mavericks' power savings in Ars Technica's excellent review of OS X 10.9 (page 12, "Energy Savings").

Solution 2

According to Intel, an Idle Wake Up is the

Number of times a thread caused the system to wake up from idleness to begin executing the thread.

Source: Idle Wake-ups (Intel.com)
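You don't need Activity Monitor to watch this counter: on macOS, `top` exposes the same per-process stat under the `idlew` key. Below is a minimal sketch of sampling it from Python (an illustration, not an official API: it assumes macOS's `top` with its `-l`/`-stats` flags, and the simple three-column parse is an assumption that may not hold across releases):

```python
import re
import subprocess

def parse_top(text):
    """Parse `top` output lines of the form `PID COMMAND IDLEW`
    into {pid: (command, idle_wakeups)}. Header and summary lines
    (which don't start with a number) are skipped."""
    procs = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(.+?)\s+(\d+)\s*$", line)
        if m:
            pid, cmd, wakeups = m.groups()
            procs[int(pid)] = (cmd, int(wakeups))
    return procs

def idle_wakeups_snapshot():
    """Take one sample from macOS `top` (macOS-only; not portable)."""
    out = subprocess.run(
        ["top", "-l", "1", "-stats", "pid,command,idlew"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_top(out)
```

Taking two snapshots a second apart and diffing the counts would give you wake-ups over a known interval, regardless of whatever quantum Activity Monitor uses.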

Author: Boom

Updated on September 18, 2022

Comments

  • Boom
    Boom almost 2 years
    • I have studied autoencoders and tried to implement a simple one.
    • I built a model with one hidden layer.
    • I ran it on the MNIST digits dataset and plotted the digits before and after the autoencoder.
    • I saw some examples that used a hidden layer of size 32 or 64; I tried those sizes, and the output was not the same as (or even close to) the source images.
    • I tried changing the hidden layer to size 784 (the same as the input size, just to test the model) but got the same results.

    What am I missing? Why do the examples on the web show good results, while my tests come out different?

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Dense
    from tensorflow.keras.models import Model, Sequential
    from tensorflow.keras.datasets import mnist
    import numpy as np
    import matplotlib.pyplot as plt

    #
    #   Build models
    hidden_size = 784  # after 32 didn't work, I tried 784, which didn't improve results
    input_layer = Input(shape=(784,))
    decoder_input_layer = Input(shape=(hidden_size,))
    hidden_layer = Dense(hidden_size, activation="relu", name="hidden1")
    autoencoder_output_layer = Dense(784, activation="sigmoid", name="output")

    # Full autoencoder: 784 -> hidden -> 784
    autoencoder = Sequential()
    autoencoder.add(input_layer)
    autoencoder.add(hidden_layer)
    autoencoder.add(autoencoder_output_layer)
    autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

    # Encoder and decoder reuse (and therefore share weights with)
    # the layers trained inside the autoencoder
    encoder = Sequential()
    encoder.add(input_layer)
    encoder.add(hidden_layer)

    decoder = Sequential()
    decoder.add(decoder_input_layer)
    decoder.add(autoencoder_output_layer)

    #
    #   Prepare input: scale to [0, 1] and flatten 28x28 images to 784-vectors
    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.
    x_test = x_test.astype('float32') / 255.
    x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
    x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

    #
    #   Fit & predict
    autoencoder.fit(x_train, x_train,
                    epochs=50,
                    batch_size=256,
                    validation_data=(x_test, x_test),
                    verbose=1)

    encoded_imgs = encoder.predict(x_test)
    decoded_imgs = decoder.predict(encoded_imgs)

    #
    #   Show results: originals on the top row, reconstructions on the bottom
    n = 10  # how many digits to display
    plt.figure(figsize=(20, 4))
    for i in range(n):
        # display original
        ax = plt.subplot(2, n, i + 1)
        plt.imshow(x_test[i].reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

        # display reconstruction
        ax = plt.subplot(2, n, i + 1 + n)
        plt.imshow(decoded_imgs[i].reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
    plt.show()

    Results: (screenshot comparing the original digits with the autoencoder's blurry reconstructions)

  • ErikAGriffin
    ErikAGriffin almost 8 years
    You mention Mavericks specifically: does that mean Yosemite and later are less power-efficient than Mavericks?
  • Pixy
    Pixy over 7 years
    At the time he commented, Mavericks was the latest release. Later versions also have it, and probably even improved on it.
  • jvriesem
    jvriesem over 3 years
    I really wish Activity Monitor would specify the quanta of time, e.g. "Idle Wake Ups/sec" or similar. I wonder if the quanta is based on the update frequency.
  • Boom
    Boom about 3 years
    This is the first time I've seen different optimizers give such different results.
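The last comment hints at a likely culprit. This is my assumption, not something confirmed in the thread: `tf.keras`'s Adadelta defaults to `learning_rate=0.001`, whereas the classic Keras autoencoder tutorial was written when Adadelta defaulted to a learning rate of 1.0, so with the modern default the weights barely move in 50 epochs and the reconstructions stay poor at any hidden size. A sketch of the two possible fixes (switch to Adam, or restore Adadelta's larger step size):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential

def build_autoencoder(hidden_size=32, optimizer="adam"):
    """Same 784 -> hidden -> 784 architecture as in the question,
    parameterized over the optimizer."""
    model = Sequential([
        Input(shape=(784,)),
        Dense(hidden_size, activation="relu"),
        Dense(784, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizer, loss="binary_crossentropy")
    return model

# Fix 1: use Adam, whose default step size trains this model quickly.
ae = build_autoencoder(optimizer="adam")

# Fix 2: keep Adadelta but set the learning rate the old tutorials assumed.
ae_adadelta = build_autoencoder(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0)
)
```

With either variant, `ae.fit(x_train, x_train, epochs=50, batch_size=256)` on the flattened MNIST data from the question should produce recognizable reconstructions even at `hidden_size=32`.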