Install applications in .wim offline


Ultimately a WIM file is just a type of disk image format. The tools available from Microsoft to manipulate a WIM offline basically amount to mounting the image, making changes to the files inside, and unmounting the image while committing the changes. There are some additional features that allow you to pre-install drivers and Microsoft hotfixes or service packs. But there's no real way to run a typical application installer against the mounted WIM file.
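
For reference, a typical offline servicing pass with DISM looks something like this (the paths and image index here are just placeholders):

    rem Mount image index 1 from the WIM into an empty folder
    Dism /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount

    rem Offline servicing: inject drivers and an update package
    Dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
    Dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\hotfix.msu

    rem Unmount the image and commit the changes back into the WIM
    Dism /Unmount-Wim /MountDir:C:\mount /Commit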

If you know exactly what the installer intended to do, you could hypothetically make the filesystem changes yourself and modify the registry using an offline registry editor. But it's probably more trouble than it's worth.
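
If you did go down that road, the offline-registry side would look roughly like this with reg.exe: load a hive out of the mounted image, make the changes, and unload it. The key and value below are purely hypothetical stand-ins for whatever the installer would have written:

    rem Load the image's SOFTWARE hive under a temporary key
    reg load HKLM\OfflineSoftware C:\mount\Windows\System32\config\SOFTWARE

    rem Hypothetical example of a change the installer might have made
    reg add HKLM\OfflineSoftware\ExampleApp /v InstallDir /t REG_SZ /d "C:\Program Files\ExampleApp"

    rem Unload the hive to write the changes back into the mounted image
    reg unload HKLM\OfflineSoftware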

Most people use WIMs in conjunction with Windows deployment tools like MDT (Microsoft Deployment Toolkit) and SCCM, as you previously mentioned. MDT is actually free, much lighter weight than SCCM, and supports the same sort of task sequences as SCCM.




Comments

  • Alexander Soare over 1 year

    I've been following a tutorial on applying CNN to classify the MNIST handwritten numbers dataset.

    I'm just a bit confused about one point of k-fold cross-validation. The author of this tutorial mentions in another tutorial that the model should be discarded each time the folds are swapped around. To quote it:

    1. Shuffle the dataset randomly.
    2. Split the dataset into k groups
    3. For each unique group:
      • Take the group as a hold out or test data set
      • Take the remaining groups as a training data set
      • Fit a model on the training set and evaluate it on the test set
      • Retain the evaluation score and discard the model
    4. Summarize the skill of the model using the sample of model evaluation scores

    However, in the CNN tutorial, this is how the author applies k-fold validation:

    from sklearn.model_selection import KFold

    def evaluate_model(model, dataX, dataY, n_folds=5):
        scores, histories = list(), list()
        # prepare cross validation
        kfold = KFold(n_folds, shuffle=True, random_state=1)
        # enumerate splits
        for train_ix, test_ix in kfold.split(dataX):
            # select rows for train and test
            trainX, trainY, testX, testY = dataX[train_ix], dataY[train_ix], dataX[test_ix], dataY[test_ix]
            # fit model
            history = model.fit(trainX, trainY, epochs=10, batch_size=32, validation_data=(testX, testY), verbose=0)
            # evaluate model
            _, acc = model.evaluate(testX, testY, verbose=0)
            print('> %.3f' % (acc * 100.0))
            # store scores
            scores.append(acc)
            histories.append(history)
        return scores, histories
    

    So the model is not being re-initialized with each iteration of the for loop. And we can see this in the chart plotting the loss at the end of each epoch (blue curve). Notice how the curves get closer and closer to the axis.

    [Chart: cross-entropy loss for the training folds (blue) and the validation folds (yellow)]

    So shouldn't the author be re-initializing the model between loops? And if so, is there a right way to do that in Keras without making a new model from scratch?
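
    For what it's worth, this is roughly what I'd imagine, rebuilding the model inside the loop from a factory function (define_model() here is just a hypothetical stand-in for however the tutorial constructs and compiles the CNN):

    from sklearn.model_selection import KFold

    def evaluate_model(dataX, dataY, n_folds=5):
        scores, histories = list(), list()
        kfold = KFold(n_folds, shuffle=True, random_state=1)
        for train_ix, test_ix in kfold.split(dataX):
            # fresh weights each fold: define_model() is assumed to build and
            # compile the CNN (with an accuracy metric) from scratch
            model = define_model()
            trainX, trainY = dataX[train_ix], dataY[train_ix]
            testX, testY = dataX[test_ix], dataY[test_ix]
            history = model.fit(trainX, trainY, epochs=10, batch_size=32,
                                validation_data=(testX, testY), verbose=0)
            _, acc = model.evaluate(testX, testY, verbose=0)
            scores.append(acc)
            histories.append(history)
        return scores, histories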

    Bonus: The yellow line in the chart above is the loss of the validation fold. Why is its shape so different from that of the training loss?

  • Residualfail almost 10 years
    Well that's unfortunate lol... I had seen MDT before, but it appeared to be a component of SCCM so I mostly ignored it. Thank you for setting my expectations on the right course, Ryan! I'll start rereading my MDT documentation now ;-)
  • Alexander Soare over 4 years
    Thanks! So you're saying that what's shown in my chart is an indicator of overfitting?
  • Geeocode over 4 years
    @AlexanderSoare You're welcome. Yes, it can be, but it also depends on the size of the dataset: with a small sample count, your validation data may not be representative enough.