[haiku-commits] haiku: hrev53442 - src/add-ons/kernel/file_systems/ramfs

  • From: waddlesplash <waddlesplash@xxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Sat, 31 Aug 2019 20:38:57 -0400 (EDT)

hrev53442 adds 9 changesets to branch 'master'
old head: 27823b29cd6f64a6180b76e4b297682c2bbcea9b
new head: 58a582ff22126e500b8fa44d822c5365d0f69da7
overview: 
https://git.haiku-os.org/haiku/log/?qt=range&q=58a582ff2212+%5E27823b29cd6f

----------------------------------------------------------------------------

6d244f23b866: ramfs: Use rw_lock instead of recursive_lock for r/w locking.
  
  Also use recursive_lock directly instead of the userlandfs shim class.

677fca26d718: ramfs: Remove the Attribute::GetKey overload that accepts a ptr-ptr.
  
  This only works because DataContainer stores its data in blocks
  which are mapped into the kernel's address space. After the
  next series of commits, it won't, so we can't depend on that.
  
  This required some changes to the indexes to keep a copy of the
  keys.

c8f0cc1afa77: ramfs: Fix bool/status mixup in DirectoryEntryTable.
  
  Now you can actually delete files again.

cbc07268196a: ramfs: Overhaul block allocation to use a VMCache and physical pages.
  
  This is a massive efficiency improvement as well as a large saving in
  address space usage. It also paves the way for file_map() support...

d2ab19b331e2: ramfs: Drop now-unused Block* classes.

181d68fbd47b: ramfs: GCC 2 fixes.

69c34116f086: ram_disk: Add note about code duplication with ramfs.

a9be0efb2e38: kernel/fs: Add support for setting custom VMCaches in vnodes.
  
  This adds one (private) VFS function, and adds a check, wherever
  vnode->cache is used as a VMVnodeCache, that it really is one.
  (Generic usages, for the moment just the ReleaseRef() calls in vnode
  destruction, are intentionally not touched.)
  
  This will be used by ramfs to set the vnode's cache to its own,
  so that map_file() calls on a ramfs can work.

58a582ff2212: ramfs: Set the vnode's cache object when opening files.
  
  Now it is possible to run applications, do Git checkouts, etc.
  on a ramfs (and those seem to work just fine: both a git checkout
  and a subsequent git fsck succeeded).

                              [ Augustin Cavalier <waddlesplash@xxxxxxxxx> ]

----------------------------------------------------------------------------

27 files changed, 395 insertions(+), 2513 deletions(-)
headers/private/kernel/vfs.h                     |   1 +
.../drivers/disk/virtual/ram_disk/ram_disk.cpp   |  10 +-
.../kernel/file_systems/ramfs/Attribute.cpp      |  22 +-
.../kernel/file_systems/ramfs/Attribute.h        |   1 -
.../file_systems/ramfs/AttributeIndexImpl.cpp    |  18 +-
src/add-ons/kernel/file_systems/ramfs/Block.h    | 319 ----------
.../kernel/file_systems/ramfs/BlockAllocator.cpp | 429 -------------
.../kernel/file_systems/ramfs/BlockAllocator.h   |  72 ---
.../file_systems/ramfs/BlockAllocatorArea.cpp    | 599 -------------------
.../file_systems/ramfs/BlockAllocatorArea.h      | 179 ------
.../ramfs/BlockAllocatorAreaBucket.cpp           |  53 --
.../ramfs/BlockAllocatorAreaBucket.h             | 101 ----
.../file_systems/ramfs/BlockAllocatorMisc.h      |  38 --
.../file_systems/ramfs/BlockReferenceManager.cpp | 133 ----
.../file_systems/ramfs/BlockReferenceManager.h   |  52 --
.../kernel/file_systems/ramfs/DataContainer.cpp  | 592 +++++++++---------
.../kernel/file_systems/ramfs/DataContainer.h    |  43 +-
.../file_systems/ramfs/DirectoryEntryTable.h     |   7 +-
src/add-ons/kernel/file_systems/ramfs/Jamfile    |   4 -
src/add-ons/kernel/file_systems/ramfs/Locking.h  |   3 +-
.../kernel/file_systems/ramfs/NodeTable.cpp      |   2 +-
src/add-ons/kernel/file_systems/ramfs/Query.cpp  |   3 +-
src/add-ons/kernel/file_systems/ramfs/Volume.cpp | 154 +----
src/add-ons/kernel/file_systems/ramfs/Volume.h   |  22 +-
.../file_systems/ramfs/kernel_interface.cpp      |  16 +-
src/system/kernel/cache/file_cache.cpp           |   6 +-
src/system/kernel/fs/vfs.cpp                     |  29 +-

############################################################################

Commit:      6d244f23b8667c0e4678b16aec1f4e55d44a2e51
URL:         https://git.haiku-os.org/haiku/commit/?id=6d244f23b866
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 17:00:05 2019 UTC

ramfs: Use rw_lock instead of recursive_lock for r/w locking.

Also use recursive_lock directly instead of the userlandfs shim class.
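
For readers unfamiliar with the kernel primitives involved, the <lock.h>
API the commit switches to is used roughly as follows (a minimal sketch;
the Example struct here is illustrative only, the actual usage is in the
Volume diff below):

    #include <lock.h>

    struct Example {
        rw_lock         fLock;
        recursive_lock  fIteratorLock;

        Example()
        {
            rw_lock_init(&fLock, "example r/w lock");
            recursive_lock_init(&fIteratorLock, "example iterators");
        }

        ~Example()
        {
            recursive_lock_destroy(&fIteratorLock);
            rw_lock_destroy(&fLock);
        }

        // rw_lock allows many concurrent readers but only one writer.
        bool ReadLock()    { return rw_lock_read_lock(&fLock) == B_OK; }
        void ReadUnlock()  { rw_lock_read_unlock(&fLock); }
        bool WriteLock()   { return rw_lock_write_lock(&fLock) == B_OK; }
        void WriteUnlock() { rw_lock_write_unlock(&fLock); }

        // recursive_lock may be re-acquired by the thread already holding it.
        bool IteratorLock()   { return recursive_lock_lock(&fIteratorLock) == B_OK; }
        void IteratorUnlock() { recursive_lock_unlock(&fIteratorLock); }
    };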

----------------------------------------------------------------------------

diff --git a/src/add-ons/kernel/file_systems/ramfs/Locking.h b/src/add-ons/kernel/file_systems/ramfs/Locking.h
index 9a43b42554..52a075bc5e 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Locking.h
+++ b/src/add-ons/kernel/file_systems/ramfs/Locking.h
@@ -2,11 +2,10 @@
  * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
  * All rights reserved. Distributed under the terms of the MIT license.
  */
-
 #ifndef LOCKING_H
 #define LOCKING_H
 
-#include "AutoLocker.h"
+#include <util/AutoLock.h>
 
 class Volume;
 
diff --git a/src/add-ons/kernel/file_systems/ramfs/Volume.cpp b/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
index 95c0795598..781ff7f950 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
@@ -139,9 +139,6 @@ Volume::Volume(fs_volume* volume)
        fIndexDirectory(NULL),
        fRootDirectory(NULL),
        fName(kDefaultVolumeName),
-       fLocker("volume"),
-       fIteratorLocker("iterators"),
-       fQueryLocker("queries"),
        fNodeListeners(NULL),
        fAnyNodeListeners(),
        fEntryListeners(NULL),
@@ -152,6 +149,9 @@ Volume::Volume(fs_volume* volume)
        fAccessTime(0),
        fMounted(false)
 {
+       rw_lock_init(&fLocker, "ramfs volume");
+       recursive_lock_init(&fIteratorLocker, "ramfs iterators");
+       recursive_lock_init(&fQueryLocker, "ramfs queries");
 }
 
 
@@ -159,6 +159,10 @@ Volume::Volume(fs_volume* volume)
 Volume::~Volume()
 {
        Unmount();
+
+       recursive_lock_destroy(&fIteratorLocker);
+       recursive_lock_destroy(&fQueryLocker);
+       rw_lock_destroy(&fLocker);
 }
 
 
@@ -168,14 +172,6 @@ Volume::Mount(uint32 flags)
 {
        Unmount();
 
-       // check the lockers
-       if (fLocker.InitCheck() < 0)
-               return fLocker.InitCheck();
-       if (fIteratorLocker.InitCheck() < 0)
-               return fIteratorLocker.InitCheck();
-       if (fQueryLocker.InitCheck() < 0)
-               return fQueryLocker.InitCheck();
-
        status_t error = B_OK;
        // create a block allocator
        if (error == B_OK) {
@@ -724,7 +720,7 @@ Volume::FindAttributeIndex(const char *name, uint32 type)
 void
 Volume::AddQuery(Query *query)
 {
-       AutoLocker<RecursiveLock> _(fQueryLocker);
+       RecursiveLocker _(fQueryLocker);
 
        if (query)
                fQueries.Insert(query);
@@ -734,7 +730,7 @@ Volume::AddQuery(Query *query)
 void
 Volume::RemoveQuery(Query *query)
 {
-       AutoLocker<RecursiveLock> _(fQueryLocker);
+       RecursiveLocker _(fQueryLocker);
 
        if (query)
                fQueries.Remove(query);
@@ -746,7 +742,7 @@ Volume::UpdateLiveQueries(Entry *entry, Node* node, const char *attribute,
        int32 type, const uint8 *oldKey, size_t oldLength, const uint8 *newKey,
        size_t newLength)
 {
-       AutoLocker<RecursiveLock> _(fQueryLocker);
+       RecursiveLocker _(fQueryLocker);
 
        for (Query* query = fQueries.First();
                 query;
@@ -826,53 +822,47 @@ Volume::GetAllocationInfo(AllocationInfo &info)
 bool
 Volume::ReadLock()
 {
-       bool alreadyLocked = fLocker.IsLocked();
-       if (fLocker.Lock()) {
-               if (!alreadyLocked)
-                       fAccessTime = system_time();
-               return true;
-       }
-       return false;
+       bool ok = rw_lock_read_lock(&fLocker) == B_OK;
+       if (ok && fLocker.owner_count > 1)
+               fAccessTime = system_time();
+       return ok;
 }
 
 // ReadUnlock
 void
 Volume::ReadUnlock()
 {
-       fLocker.Unlock();
+       rw_lock_read_unlock(&fLocker);
 }
 
 // WriteLock
 bool
 Volume::WriteLock()
 {
-       bool alreadyLocked = fLocker.IsLocked();
-       if (fLocker.Lock()) {
-               if (!alreadyLocked)
-                       fAccessTime = system_time();
-               return true;
-       }
-       return false;
+       bool ok = rw_lock_write_lock(&fLocker) == B_OK;
+       if (ok && fLocker.owner_count > 1)
+               fAccessTime = system_time();
+       return ok;
 }
 
 // WriteUnlock
 void
 Volume::WriteUnlock()
 {
-       fLocker.Unlock();
+       rw_lock_write_unlock(&fLocker);
 }
 
 // IteratorLock
 bool
 Volume::IteratorLock()
 {
-       return fIteratorLocker.Lock();
+       return recursive_lock_lock(&fIteratorLocker) == B_OK;
 }
 
 // IteratorUnlock
 void
 Volume::IteratorUnlock()
 {
-       fIteratorLocker.Unlock();
+       recursive_lock_unlock(&fIteratorLocker);
 }
 
diff --git a/src/add-ons/kernel/file_systems/ramfs/Volume.h b/src/add-ons/kernel/file_systems/ramfs/Volume.h
index 1cbec3c129..ea103123ba 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Volume.h
+++ b/src/add-ons/kernel/file_systems/ramfs/Volume.h
@@ -24,8 +24,8 @@
 
 #include <fs_interface.h>
 #include <SupportDefs.h>
+#include <lock.h>
 
-#include <userlandfs/shared/RecursiveLock.h>
 #include <util/DoublyLinkedList.h>
 
 #include "Entry.h"
@@ -186,9 +186,9 @@ private:
        IndexDirectory                  *fIndexDirectory;
        Directory                               *fRootDirectory;
        String                                  fName;
-       RecursiveLock                   fLocker;
-       RecursiveLock                   fIteratorLocker;
-       RecursiveLock                   fQueryLocker;
+       rw_lock                                 fLocker;
+       recursive_lock                  fIteratorLocker;
+       recursive_lock                  fQueryLocker;
        NodeListenerTree                *fNodeListeners;
        NodeListenerList                fAnyNodeListeners;
        EntryListenerTree               *fEntryListeners;

############################################################################

Commit:      677fca26d71818515c6b7b824ac99981fd27b7ed
URL:         https://git.haiku-os.org/haiku/commit/?id=677fca26d718
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 21:44:00 2019 UTC

ramfs: Remove the Attribute::GetKey overload that accepts a ptr-ptr.

This only works because DataContainer stores its data in blocks
which are mapped into the kernel's address space. After the
next series of commits, it won't, so we can't depend on that.

This required some changes to the indexes to keep a copy of the
keys.
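
With only the copying overload left, callers own a fixed-size buffer
instead of receiving a pointer into the attribute's storage. A minimal
usage sketch (get_key_copy() is illustrative only; the real call sites
are in the diff below):

    static void
    get_key_copy(Attribute* attribute)
    {
        uint8 key[kMaxIndexKeyLength];
        size_t length = sizeof(key);
            // GetKey() clamps the length to kMaxIndexKeyLength
        attribute->GetKey(key, &length);
        // 'key' now holds a private copy of the attribute's index key,
        // 'length' the number of bytes actually copied.
    }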

----------------------------------------------------------------------------

diff --git a/src/add-ons/kernel/file_systems/ramfs/Attribute.cpp b/src/add-ons/kernel/file_systems/ramfs/Attribute.cpp
index 93e0de294c..2a1196c6b6 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Attribute.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/Attribute.cpp
@@ -84,9 +84,9 @@ Attribute::WriteAt(off_t offset, const void *buffer, size_t size,
                fIndex->Changed(this, oldKey, oldLength);
 
        // update live queries
-       const uint8* newKey;
+       uint8 newKey[kMaxIndexKeyLength];
        size_t newLength;
-       GetKey(&newKey, &newLength);
+       GetKey(newKey, &newLength);
        GetVolume()->UpdateLiveQueries(NULL, fNode, GetName(), fType, oldKey,
                oldLength, newKey, newLength);
 
@@ -105,26 +105,12 @@ Attribute::SetIndex(AttributeIndex *index, bool inIndex)
        fInIndex = inIndex;
 }
 
-// GetKey
-void
-Attribute::GetKey(const uint8 **key, size_t *length)
-{
-       if (key && length) {
-               GetFirstDataBlock(key, length);
-               *length = min(*length, kMaxIndexKeyLength);
-       }
-}
-
 // GetKey
 void
 Attribute::GetKey(uint8 *key, size_t *length)
 {
-       if (key && length) {
-               const uint8 *originalKey = NULL;
-               GetKey(&originalKey, length);
-               if (length > 0)
-                       memcpy(key, originalKey, *length);
-       }
+       *length = min(*length, kMaxIndexKeyLength);
+       ReadAt(0, key, *length, length);
 }
 
 // AttachAttributeIterator
diff --git a/src/add-ons/kernel/file_systems/ramfs/Attribute.h b/src/add-ons/kernel/file_systems/ramfs/Attribute.h
index b87440dd0b..630e55f39c 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Attribute.h
+++ b/src/add-ons/kernel/file_systems/ramfs/Attribute.h
@@ -42,7 +42,6 @@ public:
        void SetIndex(AttributeIndex *index, bool inIndex);
        AttributeIndex *GetIndex() const        { return fIndex; }
        bool IsInIndex() const                          { return fInIndex; }
-       void GetKey(const uint8 **key, size_t *length);
        void GetKey(uint8 *key, size_t *length);
 
        // iterator management
diff --git a/src/add-ons/kernel/file_systems/ramfs/AttributeIndexImpl.cpp b/src/add-ons/kernel/file_systems/ramfs/AttributeIndexImpl.cpp
index c85642f135..8a77a6baec 100644
--- a/src/add-ons/kernel/file_systems/ramfs/AttributeIndexImpl.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/AttributeIndexImpl.cpp
@@ -66,16 +66,18 @@ compare_keys(const uint8 *key1, size_t length1, const uint8 *key2,
 // PrimaryKey
 class AttributeIndexImpl::PrimaryKey {
 public:
-       PrimaryKey(Attribute *attribute, const uint8 *key,
+       PrimaryKey(Attribute *attribute, const uint8 *theKey,
                           size_t length)
-               : attribute(attribute), key(key), length(length) {}
+               : attribute(attribute), length(length)
+                       { memcpy(key, theKey, length); }
        PrimaryKey(Attribute *attribute)
-               : attribute(attribute) { attribute->GetKey(&key, &length); }
-       PrimaryKey(const uint8 *key, size_t length)
-               : attribute(NULL), key(key), length(length) {}
+               : attribute(attribute) { attribute->GetKey(key, &length); }
+       PrimaryKey(const uint8 *theKey, size_t length)
+               : attribute(NULL), length(length)
+                       { memcpy(key, theKey, length); }
 
        Attribute       *attribute;
-       const uint8     *key;
+       uint8           key[kMaxIndexKeyLength];
        size_t          length;
 };
 
@@ -466,9 +468,9 @@ AttributeIndexImpl::Iterator::SetTo(AttributeIndexImpl *index,
                                if (!fEntry)
                                        BaseClass::GetNext();
                                if (Attribute **attribute = 
fIterator.fIterator.GetCurrent()) {
-                                       const uint8 *attrKey;
+                                       uint8 attrKey[kMaxIndexKeyLength];
                                        size_t attrKeyLength;
-                                       (*attribute)->GetKey(&attrKey, 
&attrKeyLength);
+                                       (*attribute)->GetKey(attrKey, 
&attrKeyLength);
                                        if (!ignoreValue
                                                && compare_keys(attrKey, 
attrKeyLength, key, length,
                                                                                
fIndex->GetType()) != 0) {
diff --git a/src/add-ons/kernel/file_systems/ramfs/Query.cpp b/src/add-ons/kernel/file_systems/ramfs/Query.cpp
index 9398f3bfa2..0b4e721904 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Query.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/Query.cpp
@@ -1009,9 +1009,10 @@ Equation::Match(Entry *entry, Node* node, const char *attributeName, int32 type,
        } else {
                // then for attributes
                Attribute *attribute = NULL;
+               buffer = (const uint8*)alloca(kMaxIndexKeyLength);
 
                if (node->FindAttribute(fAttribute, &attribute) == B_OK) {
-                       attribute->GetKey(&buffer, &size);
+                       attribute->GetKey((uint8*)buffer, &size);
                        type = attribute->GetType();
                } else
                        return MatchEmptyString();
diff --git a/src/add-ons/kernel/file_systems/ramfs/Volume.cpp b/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
index 781ff7f950..002b51cbb9 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
@@ -670,9 +670,9 @@ Volume::NodeAttributeRemoved(ino_t id, Attribute *attribute)
 
                // update live queries
                if (error == B_OK && attribute->GetNode()) {
-                       const uint8* oldKey;
+                       uint8 oldKey[kMaxIndexKeyLength];
                        size_t oldLength;
-                       attribute->GetKey(&oldKey, &oldLength);
+                       attribute->GetKey(oldKey, &oldLength);
                        UpdateLiveQueries(NULL, attribute->GetNode(), 
attribute->GetName(),
                                attribute->GetType(), oldKey, oldLength, NULL, 
0);
                }

############################################################################

Commit:      c8f0cc1afa779fe6dca96dc5c7f5acee46e2486d
URL:         https://git.haiku-os.org/haiku/commit/?id=c8f0cc1afa77
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 21:52:16 2019 UTC

ramfs: Fix bool/status mixup in DirectoryEntryTable.

Now you can actually delete files again.
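
The underlying pitfall, sketched below with a stand-in MyTable whose
Remove() returns bool (as the table used here does): B_OK is 0, so
implicitly converting the bool result to status_t turns a successful
removal (true, i.e. 1) into a nonzero "error" code.

    static status_t
    remove_entry_buggy(MyTable& table, Entry* child)
    {
        status_t error = table.Remove(child);
            // 'true' converts to 1, which is != B_OK, so a successful
            // removal is reported to the caller as a failure
        return error;
    }

    static status_t
    remove_entry_fixed(MyTable& table, Entry* child)
    {
        return table.Remove(child) ? B_OK : B_ERROR;
    }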

----------------------------------------------------------------------------

diff --git a/src/add-ons/kernel/file_systems/ramfs/DirectoryEntryTable.h b/src/add-ons/kernel/file_systems/ramfs/DirectoryEntryTable.h
index 71ea63779e..c823a0aa89 100644
--- a/src/add-ons/kernel/file_systems/ramfs/DirectoryEntryTable.h
+++ b/src/add-ons/kernel/file_systems/ramfs/DirectoryEntryTable.h
@@ -138,8 +138,7 @@ DirectoryEntryTable::RemoveEntry(ino_t id, const char *name)
        Entry* child = fTable.Lookup(typename DirectoryEntryHash::Key(id, 
name));
        if (!child)
                return B_NAME_NOT_FOUND;
-       status_t error = fTable.Remove(child);
-       return error;
+       return fTable.Remove(child) ? B_OK : B_ERROR;
 }
 
 // GetEntry

############################################################################

Commit:      cbc07268196ac9f95ca44e7f693a842b56fd5410
URL:         https://git.haiku-os.org/haiku/commit/?id=cbc07268196a
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 22:44:27 2019 UTC

ramfs: Overhaul block allocation to use a VMCache and physical pages.

This is a massive efficiency improvement as well as a large saving in
address space usage. It also paves the way for file_map() support...
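
In rough outline, the new DataContainer keeps data up to
kSmallDataContainerSize in an inline buffer and moves anything larger into
an anonymous VMCache, doing all further I/O against the cache's physical
pages. A condensed sketch of the read side (simplified from the diff
below: the real code marks pages busy and copies without holding the
cache lock, and also handles user buffers and write-side page allocation):

    #include <string.h>

    #include <util/AutoLock.h>
    #include <vm/VMCache.h>
    #include <vm/vm_page.h>

    static status_t
    read_from_cache(VMCache* cache, off_t offset, uint8* buffer, size_t size)
    {
        AutoLocker<VMCache> locker(cache);

        while (size > 0) {
            const off_t pageOffset = offset % B_PAGE_SIZE;
            size_t bytes = B_PAGE_SIZE - pageOffset;
            if (bytes > size)
                bytes = size;

            vm_page* page = cache->LookupPage(offset - pageOffset);
            if (page == NULL) {
                // a never-written range reads back as zeroes
                memset(buffer, 0, bytes);
            } else {
                phys_addr_t at = page->physical_page_number * B_PAGE_SIZE
                    + pageOffset;
                status_t error = vm_memcpy_from_physical(buffer, at, bytes,
                    false);
                if (error != B_OK)
                    return error;
            }

            buffer += bytes;
            offset += bytes;
            size -= bytes;
        }
        return B_OK;
    }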

----------------------------------------------------------------------------

diff --git a/src/add-ons/kernel/file_systems/ramfs/DataContainer.cpp b/src/add-ons/kernel/file_systems/ramfs/DataContainer.cpp
index ad025941c9..4779d2e6a8 100644
--- a/src/add-ons/kernel/file_systems/ramfs/DataContainer.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/DataContainer.cpp
@@ -1,420 +1,359 @@
 /*
  * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2019, Haiku, Inc.
  * All rights reserved. Distributed under the terms of the MIT license.
  */
+#include "DataContainer.h"
+
+#include <AutoDeleter.h>
+#include <util/AutoLock.h>
+
+#include <vm/VMCache.h>
+#include <vm/vm_page.h>
 
 #include "AllocationInfo.h"
-#include "Attribute.h" // for debugging only
-#include "Block.h"
-#include "DataContainer.h"
 #include "DebugSupport.h"
 #include "Misc.h"
-#include "Node.h"              // for debugging only
 #include "Volume.h"
 
 // constructor
 DataContainer::DataContainer(Volume *volume)
        : fVolume(volume),
-         fSize(0)
+         fSize(0),
+         fCache(NULL)
 {
 }
 
 // destructor
 DataContainer::~DataContainer()
 {
-       Resize(0);
+       if (fCache != NULL) {
+               fCache->Lock();
+               fCache->ReleaseRefAndUnlock();
+               fCache = NULL;
+       }
 }
 
 // InitCheck
 status_t
 DataContainer::InitCheck() const
 {
-       return (fVolume ? B_OK : B_ERROR);
+       return (fVolume != NULL ? B_OK : B_ERROR);
 }
 
 // Resize
 status_t
 DataContainer::Resize(off_t newSize)
 {
+//     PRINT("DataContainer::Resize(%Ld), fSize: %Ld\n", newSize, fSize);
+
        status_t error = B_OK;
-       if (newSize < 0)
-               newSize = 0;
-       if (newSize != fSize) {
-               // Shrinking should never fail. Growing can fail, if we run out 
of
-               // memory. Then we try to shrink back to the original size.
-               off_t oldSize = fSize;
-               error = _Resize(newSize);
-               if (error == B_NO_MEMORY && newSize > fSize)
-                       _Resize(oldSize);
+       if (newSize < fSize) {
+               // shrink
+               if (_IsCacheMode()) {
+                       // resize the VMCache, which will automatically free 
pages
+                       AutoLocker<VMCache> _(fCache);
+                       error = fCache->Resize(newSize, VM_PRIORITY_SYSTEM);
+                       if (error != B_OK)
+                               return error;
+               } else {
+                       // small buffer mode: just set the new size (done below)
+               }
+       } else if (newSize > fSize) {
+               // grow
+               if (_RequiresCacheMode(newSize)) {
+                       if (!_IsCacheMode())
+                               error = _SwitchToCacheMode(fSize);
+                       if (error != B_OK)
+                               return error;
+
+                       AutoLocker<VMCache> _(fCache);
+                       fCache->Resize(newSize, VM_PRIORITY_SYSTEM);
+
+                       // pages will be added as they are written to; so 
nothing else
+                       // needs to be done here.
+               } else {
+                       // no need to switch to cache mode: just set the new 
size
+                       // (done below)
+               }
        }
+
+       fSize = newSize;
+
+//     PRINT("DataContainer::Resize() done: %lx, fSize: %Ld\n", error, fSize);
        return error;
 }
 
 // ReadAt
 status_t
 DataContainer::ReadAt(off_t offset, void *_buffer, size_t size,
-                                         size_t *bytesRead)
+       size_t *bytesRead)
 {
        uint8 *buffer = (uint8*)_buffer;
-       status_t error = (buffer && offset >= 0 && bytesRead ? B_OK : 
B_BAD_VALUE);
-       if (error == B_OK) {
-               // read not more than we have to offer
-               offset = min(offset, fSize);
-               size = min(size, size_t(fSize - offset));
-               // iterate through the blocks, reading as long as there's 
something
-               // left to read
-               size_t blockSize = fVolume->GetBlockSize();
-               *bytesRead = 0;
-               while (size > 0) {
-                       size_t inBlockOffset = offset % blockSize;
-                       size_t toRead = min(size, size_t(blockSize - 
inBlockOffset));
-                       void *blockData = _GetBlockDataAt(offset / blockSize,
-                                                                               
          inBlockOffset, toRead);
-D(
-if (!blockData) {
-       Node *node = NULL;
-       if (Attribute *attribute = dynamic_cast<Attribute*>(this)) {
-               FATAL("attribute `%s' of\n", attribute->GetName());
-               node = attribute->GetNode();
-       } else {
-               node = dynamic_cast<Node*>(this);
-       }
-       if (node)
-//             FATAL(("node `%s'\n", node->GetName()));
-               FATAL("container size: %Ld, offset: %Ld, buffer size: %lu\n",
-                  fSize, offset, size);
-               return B_ERROR;
-}
-);
-                       memcpy(buffer, blockData, toRead);
-                       buffer += toRead;
-                       size -= toRead;
-                       offset += toRead;
-                       *bytesRead += toRead;
-               }
+       status_t error = (buffer && offset >= 0 &&
+               bytesRead ? B_OK : B_BAD_VALUE);
+       if (error != B_OK)
+               return error;
+
+       // read not more than we have to offer
+       offset = min(offset, fSize);
+       size = min(size, size_t(fSize - offset));
+
+       if (!_IsCacheMode()) {
+               // in non-cache mode, we just copy the data directly
+               memcpy(buffer, fSmallBuffer + offset, size);
+               if (bytesRead != NULL)
+                       *bytesRead = size;
+               return B_OK;
        }
+
+       // cache mode
+       error = _DoCacheIO(offset, buffer, size, bytesRead, false);
+
        return error;
 }
 
 // WriteAt
 status_t
 DataContainer::WriteAt(off_t offset, const void *_buffer, size_t size,
-                                          size_t *bytesWritten)
+       size_t *bytesWritten)
 {
-//PRINT(("DataContainer::WriteAt(%Ld, %p, %lu, %p), fSize: %Ld\n", offset, 
_buffer, size, bytesWritten, fSize));
+       PRINT("DataContainer::WriteAt(%Ld, %p, %lu, %p), fSize: %Ld\n", offset, 
_buffer, size, bytesWritten, fSize);
+
        const uint8 *buffer = (const uint8*)_buffer;
        status_t error = (buffer && offset >= 0 && bytesWritten
-                                         ? B_OK : B_BAD_VALUE);
+               ? B_OK : B_BAD_VALUE);
+       if (error != B_OK)
+               return error;
+
        // resize the container, if necessary
-       if (error == B_OK) {
-               off_t newSize = offset + size;
-               off_t oldSize = fSize;
-               if (newSize > fSize) {
-                       error = Resize(newSize);
-                       // pad with zero, if necessary
-                       if (error == B_OK && offset > oldSize)
-                               _ClearArea(offset, oldSize - offset);
-               }
-       }
-       if (error == B_OK) {
-               // iterate through the blocks, writing as long as there's 
something
-               // left to write
-               size_t blockSize = fVolume->GetBlockSize();
-               *bytesWritten = 0;
-               while (size > 0) {
-                       size_t inBlockOffset = offset % blockSize;
-                       size_t toWrite = min(size, size_t(blockSize - 
inBlockOffset));
-                       void *blockData = _GetBlockDataAt(offset / blockSize,
-                                                                               
          inBlockOffset, toWrite);
-D(if (!blockData) return B_ERROR;);
-                       memcpy(blockData, buffer, toWrite);
-                       buffer += toWrite;
-                       size -= toWrite;
-                       offset += toWrite;
-                       *bytesWritten += toWrite;
-               }
-       }
-//PRINT(("DataContainer::WriteAt() done: %lx, fSize: %Ld\n", error, fSize));
-       return error;
-}
+       if ((offset + size) > fSize)
+               error = Resize(offset + size);
+       if (error != B_OK)
+               return error;
 
-// GetFirstDataBlock
-void
-DataContainer::GetFirstDataBlock(const uint8 **data, size_t *length)
-{
-       if (data && length) {
-               if (_IsBlockMode()) {
-                       BlockReference *block = _GetBlockList()->ItemAt(0);
-                       *data = (const uint8*)block->GetData();
-                       *length = min(fSize, fVolume->GetBlockSize());
-               } else {
-                       *data = fSmallBuffer;
-                       *length = fSize;
-               }
+       if (!_IsCacheMode()) {
+               // in non-cache mode, we just copy the data directly
+               memcpy(fSmallBuffer + offset, buffer, size);
+               if (bytesWritten != NULL)
+                       *bytesWritten = size;
+               return B_OK;
        }
+
+       // cache mode
+       error = _DoCacheIO(offset, (uint8*)buffer, size, bytesWritten, true);
+
+       PRINT("DataContainer::WriteAt() done: %lx, fSize: %Ld\n", error, fSize);
+       return error;
 }
 
 // GetAllocationInfo
 void
 DataContainer::GetAllocationInfo(AllocationInfo &info)
 {
-       if (_IsBlockMode()) {
-               BlockList *blocks = _GetBlockList();
-               info.AddListAllocation(blocks->GetCapacity(), 
sizeof(BlockReference*));
-               int32 blockCount = blocks->CountItems();
-               for (int32 i = 0; i < blockCount; i++)
-                       
info.AddBlockAllocation(blocks->ItemAt(i)->GetBlock()->GetSize());
+       if (_IsCacheMode()) {
+               info.AddAreaAllocation(fCache->committed_size);
        } else {
                // ...
        }
 }
 
-// _RequiresBlockMode
-inline
-bool
-DataContainer::_RequiresBlockMode(size_t size)
-{
-       return (size > kSmallDataContainerSize);
-}
-
-// _IsBlockMode
-inline
-bool
-DataContainer::_IsBlockMode() const
-{
-       return (fSize > kSmallDataContainerSize);
-}
-
-// _Resize
-status_t
-DataContainer::_Resize(off_t newSize)
+// _RequiresCacheMode
+inline bool
+DataContainer::_RequiresCacheMode(size_t size)
 {
-//PRINT(("DataContainer::_Resize(%Ld), fSize: %Ld\n", newSize, fSize));
-       status_t error = B_OK;
-       if (newSize != fSize) {
-               size_t blockSize = fVolume->GetBlockSize();
-               int32 blockCount = _CountBlocks();
-               int32 newBlockCount = (newSize + blockSize - 1) / blockSize;
-               if (newBlockCount == blockCount) {
-                       // only the last block needs to be resized
-                       if (_IsBlockMode() && _RequiresBlockMode(newSize)) {
-                               // keep block mode
-                               error = _ResizeLastBlock((newSize - 1) % 
blockSize + 1);
-                       } else if (!_IsBlockMode() && 
!_RequiresBlockMode(newSize)) {
-                               // keep small buffer mode
-                               fSize = newSize;
-                       } else if (fSize < newSize) {
-                               // switch to block mode
-                               _SwitchToBlockMode(newSize);
-                       } else {
-                               // switch to small buffer mode
-                               _SwitchToSmallBufferMode(newSize);
-                       }
-               } else if (newBlockCount < blockCount) {
-                       // shrink
-                       if (_IsBlockMode()) {
-                               // remove the last blocks
-                               BlockList *blocks = _GetBlockList();
-                               for (int32 i = blockCount - 1; i >= 
newBlockCount; i--) {
-                                       BlockReference *block = 
blocks->ItemAt(i);
-                                       blocks->RemoveItem(i);
-                                       fVolume->FreeBlock(block);
-                                       fSize = (fSize - 1) / blockSize * 
blockSize;
-                               }
-                               // resize the last block to the correct size, 
respectively
-                               // switch to small buffer mode
-                               if (_RequiresBlockMode(newSize))
-                                       error = _ResizeLastBlock((newSize - 1) 
% blockSize + 1);
-                               else
-                                       _SwitchToSmallBufferMode(newSize);
-                       } else {
-                               // small buffer mode: just set the new size
-                               fSize = newSize;
-                       }
-               } else {
-                       // grow
-                       if (_RequiresBlockMode(newSize)) {
-                               // resize the first block to the correct size, 
respectively
-                               // switch to block mode
-                               if (_IsBlockMode())
-                                       error = _ResizeLastBlock(blockSize);
-                               else {
-                                       error = 
_SwitchToBlockMode(min((size_t)newSize,
-                                                                               
                   blockSize));
-                               }
-                               // add new blocks
-                               BlockList *blocks = _GetBlockList();
-                               while (error == B_OK && fSize < newSize) {
-                                       size_t newBlockSize = 
min(size_t(newSize - fSize),
-                                                                               
          blockSize);
-                                       BlockReference *block = NULL;
-                                       error = 
fVolume->AllocateBlock(newBlockSize, &block);
-                                       if (error == B_OK) {
-                                               if (blocks->AddItem(block))
-                                                       fSize += newBlockSize;
-                                               else {
-                                                       SET_ERROR(error, 
B_NO_MEMORY);
-                                                       
fVolume->FreeBlock(block);
-                                               }
-                                       }
-                               }
-                       } else {
-                               // no need to switch to block mode: just set 
the new size
-                               fSize = newSize;
-                       }
-               }
-       }
-//PRINT(("DataContainer::_Resize() done: %lx, fSize: %Ld\n", error, fSize));
-       return error;
+       // we cannot back out of cache mode after entering it,
+       // as there may be other consumers of our VMCache
+       return _IsCacheMode() || (size > kSmallDataContainerSize);
 }
 
-// _GetBlockList
-inline
-DataContainer::BlockList *
-DataContainer::_GetBlockList()
+// _IsCacheMode
+inline bool
+DataContainer::_IsCacheMode() const
 {
-       return (BlockList*)fBlocks;
-}
-
-// _GetBlockList
-inline
-const DataContainer::BlockList *
-DataContainer::_GetBlockList() const
-{
-       return (BlockList*)fBlocks;
+       return fCache != NULL;
 }
 
 // _CountBlocks
-inline
-int32
+inline int32
 DataContainer::_CountBlocks() const
 {
-       if (_IsBlockMode())
-               return _GetBlockList()->CountItems();
+       if (_IsCacheMode())
+               return fCache->page_count;
        else if (fSize == 0)    // small buffer mode, empty buffer
                return 0;
        return 1;       // small buffer mode, non-empty buffer
 }
 
-// _GetBlockDataAt
-inline
-void *
-DataContainer::_GetBlockDataAt(int32 index, size_t offset, size_t DARG(size))
+// _SwitchToCacheMode
+status_t
+DataContainer::_SwitchToCacheMode(size_t newBlockSize)
 {
-       if (_IsBlockMode()) {
-               BlockReference *block = _GetBlockList()->ItemAt(index);
-D(if (!fVolume->CheckBlock(block, offset + size)) return NULL;);
-               return block->GetDataAt(offset);
-       } else {
-D(
-if (offset + size > kSmallDataContainerSize) {
-       FATAL("DataContainer: Data access exceeds small buffer.\n");
-       PANIC("DataContainer: Data access exceeds small buffer.");
-       return NULL;
-}
-);
-               return fSmallBuffer + offset;
-       }
-}
+       status_t error = VMCacheFactory::CreateAnonymousCache(fCache, false, 0,
+               0, false, VM_PRIORITY_SYSTEM);
+       if (error != B_OK)
+               return error;
 
-// _ClearArea
-void
-DataContainer::_ClearArea(off_t offset, off_t size)
-{
-       // constrain the area to the data area
-       offset = min(offset, fSize);
-       size = min(size, fSize - offset);
-       // iterate through the blocks, clearing as long as there's something
-       // left to clear
-       size_t blockSize = fVolume->GetBlockSize();
-       while (size > 0) {
-               size_t inBlockOffset = offset % blockSize;
-               size_t toClear = min(size_t(size), blockSize - inBlockOffset);
-               void *blockData = _GetBlockDataAt(offset / blockSize, 
inBlockOffset,
-                                                                               
  toClear);
-D(if (!blockData) return;);
-               memset(blockData, 0, toClear);
-               size -= toClear;
-               offset += toClear;
-       }
+       fCache->temporary = 1;
+       fCache->virtual_end = newBlockSize;
+
+       error = fCache->Commit(newBlockSize, VM_PRIORITY_SYSTEM);
+       if (error != B_OK)
+               return error;
+
+       if (fSize != 0)
+               error = _DoCacheIO(0, fSmallBuffer, fSize, NULL, true);
+
+       return error;
 }
 
-// _ResizeLastBlock
+// _DoCacheIO
 status_t
-DataContainer::_ResizeLastBlock(size_t newSize)
+DataContainer::_DoCacheIO(const off_t offset, uint8* buffer, ssize_t length,
+       size_t* bytesProcessed, bool isWrite)
 {
-//PRINT(("DataContainer::_ResizeLastBlock(%lu), fSize: %Ld\n", newSize, 
fSize));
-       int32 blockCount = _CountBlocks();
-       status_t error = (fSize > 0 && blockCount > 0 && newSize > 0
-                                         ? B_OK : B_BAD_VALUE);
-D(
-if (!_IsBlockMode()) {
-       FATAL("Call of _ResizeLastBlock() in small buffer mode.\n");
-       PANIC("Call of _ResizeLastBlock() in small buffer mode.");
-       return B_ERROR;
-}
-);
-       if (error == B_OK) {
-               size_t blockSize = fVolume->GetBlockSize();
-               size_t oldSize = (fSize - 1) % blockSize + 1;
-               if (newSize != oldSize) {
-                       BlockList *blocks = _GetBlockList();
-                       BlockReference *block = blocks->ItemAt(blockCount - 1);
-                       BlockReference *newBlock = fVolume->ResizeBlock(block, 
newSize);
-                       if (newBlock) {
-                               if (newBlock != block)
-                                       blocks->ReplaceItem(blockCount - 1, 
newBlock);
-                               fSize += off_t(newSize) - oldSize;
-                       } else
-                               SET_ERROR(error, B_NO_MEMORY);
+       const size_t originalLength = length;
+       const bool user = IS_USER_ADDRESS(buffer);
+
+       const off_t rounded_offset = ROUNDDOWN(offset, B_PAGE_SIZE);
+       const size_t rounded_len = ROUNDUP((length) + (offset - rounded_offset),
+               B_PAGE_SIZE);
+       vm_page** pages = new(std::nothrow) vm_page*[rounded_len / B_PAGE_SIZE];
+       if (pages == NULL)
+               return B_NO_MEMORY;
+       ArrayDeleter<vm_page*> pagesDeleter(pages);
+
+       _GetPages(rounded_offset, rounded_len, isWrite, pages);
+
+       status_t error = B_OK;
+       size_t index = 0;
+
+       while (length > 0) {
+               vm_page* page = pages[index];
+               phys_addr_t at = (page->physical_page_number * B_PAGE_SIZE);
+               ssize_t bytes = B_PAGE_SIZE;
+               if (index == 0) {
+                       const uint32 pageoffset = (offset % B_PAGE_SIZE);
+                       at += pageoffset;
+                       bytes -= pageoffset;
                }
+               bytes = min(length, bytes);
+
+               if (isWrite) {
+                       page->modified = true;
+                       error = vm_memcpy_to_physical(at, buffer, bytes, user);
+               } else {
+                       if (page != NULL) {
+                               error = vm_memcpy_from_physical(buffer, at, 
bytes, user);
+                       } else {
+                               if (user) {
+                                       error = user_memset(buffer, 0, bytes);
+                               } else {
+                                       memset(buffer, 0, bytes);
+                               }
+                       }
+               }
+               if (error != B_OK)
+                       break;
+
+               buffer += bytes;
+               length -= bytes;
+               index++;
        }
-//PRINT(("DataContainer::_ResizeLastBlock() done: %lx, fSize: %Ld\n", error, 
fSize));
+
+       _PutPages(rounded_offset, rounded_len, pages, error == B_OK);
+
+       if (bytesProcessed != NULL)
+               *bytesProcessed = length > 0 ? originalLength - length : 
originalLength;
+
        return error;
 }
 
-// _SwitchToBlockMode
-status_t
-DataContainer::_SwitchToBlockMode(size_t newBlockSize)
+// _GetPages
+void
+DataContainer::_GetPages(off_t offset, off_t length, bool isWrite,
+       vm_page** pages)
 {
-       // allocate a new block
-       BlockReference *block = NULL;
-       status_t error = fVolume->AllocateBlock(newBlockSize, &block);
-       if (error == B_OK) {
-               // copy the data from the small buffer into the block
-               if (fSize > 0)
-                       memcpy(block->GetData(), fSmallBuffer, fSize);
-               // construct the block list and add the block
-               new (fBlocks) BlockList(10);
-               BlockList *blocks = _GetBlockList();
-               if (blocks->AddItem(block)) {
-                       fSize = newBlockSize;
-               } else {
-                       // error: destroy the block list and free the block
-                       SET_ERROR(error, B_NO_MEMORY);
-                       blocks->~BlockList();
-                       if (fSize > 0)
-                               memcpy(fSmallBuffer, block->GetData(), fSize);
-                       fVolume->FreeBlock(block);
+       // TODO: This method is duplicated in the ram_disk. Perhaps it
+       // should be put into a common location?
+
+       // get the pages, we already have
+       AutoLocker<VMCache> locker(fCache);
+
+       size_t pageCount = length / B_PAGE_SIZE;
+       size_t index = 0;
+       size_t missingPages = 0;
+
+       while (length > 0) {
+               vm_page* page = fCache->LookupPage(offset);
+               if (page != NULL) {
+                       if (page->busy) {
+                               fCache->WaitForPageEvents(page, 
PAGE_EVENT_NOT_BUSY, true);
+                               continue;
+                       }
+
+                       DEBUG_PAGE_ACCESS_START(page);
+                       page->busy = true;
+               } else
+                       missingPages++;
+
+               pages[index++] = page;
+               offset += B_PAGE_SIZE;
+               length -= B_PAGE_SIZE;
+       }
+
+       locker.Unlock();
+
+       // For a write we need to reserve the missing pages.
+       if (isWrite && missingPages > 0) {
+               vm_page_reservation reservation;
+               vm_page_reserve_pages(&reservation, missingPages,
+                       VM_PRIORITY_SYSTEM);
+
+               for (size_t i = 0; i < pageCount; i++) {
+                       if (pages[i] != NULL)
+                               continue;
+
+                       pages[i] = vm_page_allocate_page(&reservation,
+                               PAGE_STATE_WIRED | VM_PAGE_ALLOC_BUSY);
+
+                       if (--missingPages == 0)
+                               break;
                }
+
+               vm_page_unreserve_pages(&reservation);
        }
-       return error;
 }
 
-// _SwitchToSmallBufferMode
 void
-DataContainer::_SwitchToSmallBufferMode(size_t newSize)
+DataContainer::_PutPages(off_t offset, off_t length, vm_page** pages,
+       bool success)
 {
-       // remove the first (and only) block
-       BlockList *blocks = _GetBlockList();
-       BlockReference *block = blocks->ItemAt(0);
-       blocks->RemoveItem((int32)0L);
-       // destroy the block list and copy the data into the small buffer
-       blocks->~BlockList();
-       if (newSize > 0)
-               memcpy(fSmallBuffer, block->GetData(), newSize);
-       // free the block and set the new size
-       fVolume->FreeBlock(block);
-       fSize = newSize;
-}
+       // TODO: This method is duplicated in the ram_disk. Perhaps it
+       // should be put into a common location?
+
+       AutoLocker<VMCache> locker(fCache);
 
+       // Mark all pages unbusy. On error free the newly allocated pages.
+       size_t index = 0;
+
+       while (length > 0) {
+               vm_page* page = pages[index++];
+               if (page != NULL) {
+                       if (page->CacheRef() == NULL) {
+                               if (success) {
+                                       fCache->InsertPage(page, offset);
+                                       fCache->MarkPageUnbusy(page);
+                                       DEBUG_PAGE_ACCESS_END(page);
+                               } else
+                                       vm_page_free(NULL, page);
+                       } else {
+                               fCache->MarkPageUnbusy(page);
+                               DEBUG_PAGE_ACCESS_END(page);
+                       }
+               }
+
+               offset += B_PAGE_SIZE;
+               length -= B_PAGE_SIZE;
+       }
+}
diff --git a/src/add-ons/kernel/file_systems/ramfs/DataContainer.h b/src/add-ons/kernel/file_systems/ramfs/DataContainer.h
index d51e640ae2..7c2fe5a45d 100644
--- a/src/add-ons/kernel/file_systems/ramfs/DataContainer.h
+++ b/src/add-ons/kernel/file_systems/ramfs/DataContainer.h
@@ -1,14 +1,16 @@
 /*
  * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
+ * Copyright 2019, Haiku, Inc.
  * All rights reserved. Distributed under the terms of the MIT license.
  */
 #ifndef DATA_CONTAINER_H
 #define DATA_CONTAINER_H
 
-#include "List.h"
+#include <OS.h>
 
+struct vm_page;
+class VMCache;
 class AllocationInfo;
-class BlockReference;
 class Volume;
 
 // Size of the DataContainer's small buffer. If it contains data up to this
@@ -49,38 +51,25 @@ public:
        virtual status_t WriteAt(off_t offset, const void *buffer, size_t size,
                                                         size_t *bytesWritten);
 
-       void GetFirstDataBlock(const uint8 **data, size_t *length);
-
        // debugging
        void GetAllocationInfo(AllocationInfo &info);
 
 private:
-       typedef List<BlockReference*>   BlockList;
-
-private:
-       static inline bool _RequiresBlockMode(size_t size);
-       inline bool _IsBlockMode() const;
+       inline bool _RequiresCacheMode(size_t size);
+       inline bool _IsCacheMode() const;
+       status_t _SwitchToCacheMode(size_t newBlockSize);
+       void _GetPages(off_t offset, off_t length, bool isWrite, vm_page** 
pages);
+       void _PutPages(off_t offset, off_t length, vm_page** pages, bool 
success);
+       status_t _DoCacheIO(const off_t offset, uint8* buffer, ssize_t length,
+               size_t* bytesProcessed, bool isWrite);
 
-       inline BlockList *_GetBlockList();
-       inline const BlockList *_GetBlockList() const;
        inline int32 _CountBlocks() const;
-       inline void *_GetBlockDataAt(int32 index, size_t offset, size_t size);
-
-       void _ClearArea(off_t offset, off_t size);
-
-       status_t _Resize(off_t newSize);
-       status_t _ResizeLastBlock(size_t newSize);
-
-       status_t _SwitchToBlockMode(size_t newBlockSize);
-       void _SwitchToSmallBufferMode(size_t newSize);
 
 private:
-       Volume                                  *fVolume;
-       off_t                                   fSize;
-       union {
-               uint8                           fBlocks[sizeof(BlockList)];
-               uint8                           
fSmallBuffer[kSmallDataContainerSize];
-       };
+       Volume                          *fVolume;
+       off_t                           fSize;
+       VMCache*                        fCache;
+       uint8                           fSmallBuffer[kSmallDataContainerSize];
 };
 
 #endif // DATA_CONTAINER_H
diff --git a/src/add-ons/kernel/file_systems/ramfs/Jamfile b/src/add-ons/kernel/file_systems/ramfs/Jamfile
index c5920f53c8..446c9ba17a 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Jamfile
+++ b/src/add-ons/kernel/file_systems/ramfs/Jamfile
@@ -12,10 +12,6 @@ KernelAddon ramfs
        AttributeIndex.cpp
        AttributeIndexImpl.cpp
        AttributeIterator.cpp
-       BlockAllocator.cpp
-       BlockAllocatorArea.cpp
-       BlockAllocatorAreaBucket.cpp
-       BlockReferenceManager.cpp
        DataContainer.cpp
        DebugSupport.cpp
        Directory.cpp
diff --git a/src/add-ons/kernel/file_systems/ramfs/Volume.cpp b/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
index 002b51cbb9..1f46f543dd 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/Volume.cpp
@@ -26,8 +26,8 @@
 #include <string.h>
 #include <unistd.h>
 
-#include "Block.h"
-#include "BlockAllocator.h"
+#include <vm/vm_page.h>
+
 #include "DebugSupport.h"
 #include "Directory.h"
 #include "DirectoryEntryTable.h"
@@ -43,11 +43,6 @@
 #include "TwoKeyAVLTree.h"
 #include "Volume.h"
 
-// default block size
-static const off_t kDefaultBlockSize = 4096;
-
-static const size_t kDefaultAreaSize = kDefaultBlockSize * 128;
-
 // default volume name
 static const char *kDefaultVolumeName = "RAM FS";
 
@@ -143,9 +138,6 @@ Volume::Volume(fs_volume* volume)
        fAnyNodeListeners(),
        fEntryListeners(NULL),
        fAnyEntryListeners(),
-       fBlockAllocator(NULL),
-       fBlockSize(kDefaultBlockSize),
-       fAllocatedBlocks(0),
        fAccessTime(0),
        fMounted(false)
 {
@@ -173,14 +165,6 @@ Volume::Mount(uint32 flags)
        Unmount();
 
        status_t error = B_OK;
-       // create a block allocator
-       if (error == B_OK) {
-               fBlockAllocator = new(nothrow) BlockAllocator(kDefaultAreaSize);
-               if (fBlockAllocator)
-                       error = fBlockAllocator->InitCheck();
-               else
-                       SET_ERROR(error, B_NO_MEMORY);
-       }
        // create the listener trees
        if (error == B_OK) {
                fNodeListeners = new(nothrow) NodeListenerTree;
@@ -267,41 +251,22 @@ Volume::Unmount()
                delete fNodeTable;
                fNodeTable = NULL;
        }
-       // delete the block allocator
-       if (fBlockAllocator) {
-               delete fBlockAllocator;
-               fBlockAllocator = NULL;
-       }
        return B_OK;
 }
 
-// GetBlockSize
-off_t
-Volume::GetBlockSize() const
-{
-       return fBlockSize;
-}
-
 // CountBlocks
 off_t
 Volume::CountBlocks() const
 {
-       size_t bytes = 0;
-       system_info sysInfo;
-       if (get_system_info(&sysInfo) == B_OK) {
-               int32 freePages = sysInfo.max_pages - sysInfo.used_pages;
-               bytes = (uint32)freePages * B_PAGE_SIZE
-                               + fBlockAllocator->GetAvailableBytes();
-       }
-       return bytes / kDefaultBlockSize;
+       // TODO: Compute how many pages we are using across all DataContainers.
+       return 0;
 }
 
 // CountFreeBlocks
 off_t
 Volume::CountFreeBlocks() const
 {
-       // TODO:...
-       return CountBlocks() - fBlockAllocator->GetUsedBytes() / 
kDefaultBlockSize;
+       return vm_page_num_available_pages();
 }
 
 // SetName
@@ -752,54 +717,6 @@ Volume::UpdateLiveQueries(Entry *entry, Node* node, const char *attribute,
        }
 }
 
-// AllocateBlock
-status_t
-Volume::AllocateBlock(size_t size, BlockReference **block)
-{
-       status_t error = (size > 0 && size <= fBlockSize && block
-                                         ? B_OK : B_BAD_VALUE);
-       if (error == B_OK) {
-               *block = fBlockAllocator->AllocateBlock(size);
-               if (*block)
-                       fAllocatedBlocks++;
-               else
-                       SET_ERROR(error, B_NO_MEMORY);
-       }
-       return error;
-}
-
-// FreeBlock
-void
-Volume::FreeBlock(BlockReference *block)
-{
-       if (block) {
-               fBlockAllocator->FreeBlock(block);
-               fAllocatedBlocks--;
-       }
-}
-
-// ResizeBlock
-BlockReference *
-Volume::ResizeBlock(BlockReference *block, size_t size)
-{
-       BlockReference *newBlock = NULL;
-       if (size <= fBlockSize && block) {
-               if (size == 0) {
-                       fBlockAllocator->FreeBlock(block);
-                       fAllocatedBlocks--;
-               } else
-                       newBlock = fBlockAllocator->ResizeBlock(block, size);
-       }
-       return newBlock;
-}
-
-// CheckBlock
-bool
-Volume::CheckBlock(BlockReference *block, size_t size)
-{
-       return fBlockAllocator->CheckBlock(block, size);
-}
-
 // GetAllocationInfo
 void
 Volume::GetAllocationInfo(AllocationInfo &info)
@@ -813,9 +730,6 @@ Volume::GetAllocationInfo(AllocationInfo &info)
        fRootDirectory->GetAllocationInfo(info);
        // name
        info.AddStringAllocation(fName.GetLength());
-       // block allocator
-       info.AddOtherAllocation(sizeof(BlockAllocator));
-       fBlockAllocator->GetAllocationInfo(info);
 }
 
 // ReadLock
diff --git a/src/add-ons/kernel/file_systems/ramfs/Volume.h b/src/add-ons/kernel/file_systems/ramfs/Volume.h
index ea103123ba..f3425918b6 100644
--- a/src/add-ons/kernel/file_systems/ramfs/Volume.h
+++ b/src/add-ons/kernel/file_systems/ramfs/Volume.h
@@ -34,9 +34,8 @@
 #include "String.h"
 
 class AllocationInfo;
-class Block;
-class BlockAllocator;
-class BlockReference;
+class Attribute;
+class AttributeIndex;
 class Directory;
 class DirectoryEntryTable;
 class Entry;
@@ -101,7 +100,6 @@ public:
        dev_t GetID() const { return fVolume != NULL ? fVolume->id : -1; }
        fs_volume* FSVolume() const { return fVolume; }
 
-       off_t GetBlockSize() const;
        off_t CountBlocks() const;
        off_t CountFreeBlocks() const;
 
@@ -156,11 +154,6 @@ public:
 
        ino_t NextNodeID() { return fNextNodeID++; }
 
-       status_t AllocateBlock(size_t size, BlockReference **block);
-       void FreeBlock(BlockReference *block);
-       BlockReference *ResizeBlock(BlockReference *block, size_t size);
-       // debugging only
-       bool CheckBlock(BlockReference *block, size_t size = 0);
        void GetAllocationInfo(AllocationInfo &info);
 
        bigtime_t GetAccessTime() const { return fAccessTime; }
@@ -194,9 +187,6 @@ private:
        EntryListenerTree               *fEntryListeners;
        EntryListenerList               fAnyEntryListeners;
        QueryList                               fQueries;
-       BlockAllocator                  *fBlockAllocator;
-       off_t                                   fBlockSize;
-       off_t                                   fAllocatedBlocks;
        bigtime_t                               fAccessTime;
        bool                                    fMounted;
 };
diff --git a/src/add-ons/kernel/file_systems/ramfs/kernel_interface.cpp b/src/add-ons/kernel/file_systems/ramfs/kernel_interface.cpp
index 6f8d803689..42d0bff234 100644
--- a/src/add-ons/kernel/file_systems/ramfs/kernel_interface.cpp
+++ b/src/add-ons/kernel/file_systems/ramfs/kernel_interface.cpp
@@ -137,7 +137,7 @@ ramfs_read_fs_info(fs_volume* _volume, struct fs_info *info)
        if (VolumeReadLocker locker = volume) {
                info->flags = B_FS_IS_PERSISTENT | B_FS_HAS_ATTR | B_FS_HAS_MIME
                        | B_FS_HAS_QUERY;
-               info->block_size = volume->GetBlockSize();
+               info->block_size = B_PAGE_SIZE;
                info->io_size = kOptimalIOSize;
                info->total_blocks = volume->CountBlocks();
                info->free_blocks = volume->CountFreeBlocks();

############################################################################

Commit:      d2ab19b331e2dc802705d59383c6ed7553438384
URL:         https://git.haiku-os.org/haiku/commit/?id=d2ab19b331e2
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 22:47:49 2019 UTC

ramfs: Drop now-unused Block* classes.

----------------------------------------------------------------------------

diff --git a/src/add-ons/kernel/file_systems/ramfs/Block.h b/src/add-ons/kernel/file_systems/ramfs/Block.h
deleted file mode 100644
index 1a5bc7f404..0000000000
--- a/src/add-ons/kernel/file_systems/ramfs/Block.h
+++ /dev/null
@@ -1,319 +0,0 @@
-/*
- * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
- * All rights reserved. Distributed under the terms of the MIT license.
- */
-#ifndef BLOCK_H
-#define BLOCK_H
-
-class Block;
-class BlockHeader;
-class BlockReference;
-class TFreeBlock;
-
-#include <SupportDefs.h>
-
-// debugging
-//#define inline
-#define BA_DEFINE_INLINES      1
-
-// BlockHeader
-class BlockHeader {
-public:
-       inline Block *ToBlock()                         { return (Block*)this; }
-       inline TFreeBlock *ToFreeBlock()        { return (TFreeBlock*)this; }
-
-       inline void SetPreviousBlock(Block *block);
-       inline Block *GetPreviousBlock();
-
-       inline void SetNextBlock(Block *block);
-       inline Block *GetNextBlock();
-       inline bool HasNextBlock()                      { return (fSize & 
HAS_NEXT_FLAG); }
-
-       inline void SetSize(size_t size, bool hasNext = false);
-       inline size_t GetSize() const;
-       static inline size_t GetUsableSizeFor(size_t size);
-       inline size_t GetUsableSize() const;
-
-       inline void *GetData();
-
-       inline void SetFree(bool flag);
-       inline bool IsFree() const;
-
-       inline void SetReference(BlockReference *ref);
-       inline BlockReference *GetReference() const             { return 
fReference; }
-       inline void FixReference();
-
-       inline void SetTo(Block *previous, size_t size, bool isFree, bool 
hasNext,
-                                         BlockReference *reference = NULL);
-
-private:
-       enum {
-               FREE_FLAG               = 0x80000000,
-               BACK_SKIP_MASK  = 0x7fffffff,
-       };
-
-       enum {
-               HAS_NEXT_FLAG   = 0x80000000,
-               SIZE_MASK               = 0x7fffffff,
-       };
-
-protected:
-       BlockHeader();
-       ~BlockHeader();
-
-protected:
-       size_t                  fBackSkip;
-       size_t                  fSize;
-       BlockReference  *fReference;
-};
-
-// Block
-class Block : public BlockHeader {
-public:
-       static inline Block *MakeBlock(void *address, ssize_t offset,
-                                                                 Block 
*previous, size_t size, bool isFree,
-                                                                 bool hasNext,
-                                                                 
BlockReference *reference = NULL);
-
-protected:
-       Block();
-       ~Block();
-};
-
-// TFreeBlock
-class TFreeBlock : public Block {
-public:
-
-       inline void SetPreviousFreeBlock(TFreeBlock *block)     { fPrevious = 
block; }
-       inline void SetNextFreeBlock(TFreeBlock *block)         { fNext = 
block; }
-       inline TFreeBlock *GetPreviousFreeBlock()                       { 
return fPrevious; }
-       inline TFreeBlock *GetNextFreeBlock()                           { 
return fNext; }
-
-       inline void SetTo(Block *previous, size_t size, bool hasNext,
-                                         TFreeBlock *previousFree, TFreeBlock 
*nextFree);
-
-//     static inline TFreeBlock *MakeFreeBlock(void *address, ssize_t offset,
-//             Block *previous, size_t size, bool hasNext, TFreeBlock 
*previousFree,
-//             TFreeBlock *nextFree);
-
-protected:
-       TFreeBlock();
-       ~TFreeBlock();
-
-private:
-       TFreeBlock      *fPrevious;
-       TFreeBlock      *fNext;
-};
-
-// BlockReference
-class BlockReference {
-public:
-       inline BlockReference()                         : fBlock(NULL) {}
-       inline BlockReference(Block *block) : fBlock(block) {}
-
-       inline void SetBlock(Block *block)      { fBlock = block; }
-       inline Block *GetBlock() const          { return fBlock; }
-
-       inline void *GetData() const            { return fBlock->GetData(); }
-       inline void *GetDataAt(ssize_t offset) const;
-
-private:
-       Block   *fBlock;
-};
-
-
-// ---------------------------------------------------------------------------
-// inline methods
-
-// debugging
-#if BA_DEFINE_INLINES
-
-// BlockHeader
-
-// SetPreviousBlock
-inline
-void
-BlockHeader::SetPreviousBlock(Block *block)
-{
-       size_t offset = (block ? (char*)this - (char*)block : 0);
-       fBackSkip = (fBackSkip & FREE_FLAG) | offset;
-}
-
-// GetPreviousBlock
-inline
-Block *
-BlockHeader::GetPreviousBlock()
-{
-       if (fBackSkip & BACK_SKIP_MASK)
-               return (Block*)((char*)this - (fBackSkip & BACK_SKIP_MASK));
-       return NULL;
-}
-
-// SetNextBlock
-inline
-void
-BlockHeader::SetNextBlock(Block *block)
-{
-       if (block)
-               fSize = ((char*)block - (char*)this) | HAS_NEXT_FLAG;
-       else
-               fSize &= SIZE_MASK;
-}
-
-// GetNextBlock
-inline
-Block *
-BlockHeader::GetNextBlock()
-{
-       if (fSize & HAS_NEXT_FLAG)
-               return (Block*)((char*)this + (SIZE_MASK & fSize));
-       return NULL;
-}
-
-// SetSize
-inline
-void
-BlockHeader::SetSize(size_t size, bool hasNext)
-{
-       fSize = size;
-       if (hasNext)
-               fSize |= HAS_NEXT_FLAG;
-}
-
-// GetSize
-inline
-size_t
-BlockHeader::GetSize() const
-{
-       return (fSize & SIZE_MASK);
-}
-
-// GetUsableSizeFor
-inline
-size_t
-BlockHeader::GetUsableSizeFor(size_t size)
-{
-       return (size - sizeof(BlockHeader));
-}
-
-// GetUsableSize
-inline
-size_t
-BlockHeader::GetUsableSize() const
-{
-       return GetUsableSizeFor(GetSize());
-}
-
-// GetData
-inline
-void *
-BlockHeader::GetData()
-{
-       return (char*)this + sizeof(BlockHeader);
-}
-
-// SetFree
-inline
-void
-BlockHeader::SetFree(bool flag)
-{
-       if (flag)
-               fBackSkip |= FREE_FLAG;
-       else
-               fBackSkip &= ~FREE_FLAG;
-}
-
-// IsFree
-inline
-bool
-BlockHeader::IsFree() const
-{
-       return (fBackSkip & FREE_FLAG);
-}
-
-// SetTo
-inline
-void
-BlockHeader::SetTo(Block *previous, size_t size, bool isFree, bool hasNext,
-                                  BlockReference *reference)
-{
-       SetPreviousBlock(previous);
-       SetSize(size, hasNext);
-       SetFree(isFree);
-       SetReference(reference);
-}
-
-// SetReference
-inline
-void
-BlockHeader::SetReference(BlockReference *ref)
-{
-       fReference = ref;
-       FixReference();
-}
-
-// FixReference
-inline
-void
-BlockHeader::FixReference()
-{
-       if (fReference)
-               fReference->SetBlock(ToBlock());
-}
-
-
-// Block
-
-// MakeBlock
-/*inline
-Block *
-Block::MakeBlock(void *address, ssize_t offset, Block *previous, size_t size,
-                                bool isFree, bool hasNext, BlockReference 
*reference)
-{
-       Block *block = (Block*)((char*)address + offset);
-       block->SetTo(previous, size, isFree, hasNext, reference);
-       return block;
-}*/
-
-
-// TFreeBlock
-
-// SetTo
-inline
-void
-TFreeBlock::SetTo(Block *previous, size_t size, bool hasNext,
-                                 TFreeBlock *previousFree, TFreeBlock 
*nextFree)
-{
-       Block::SetTo(previous, size, true, hasNext, NULL);
-       SetPreviousFreeBlock(previousFree);
-       SetNextFreeBlock(nextFree);
-}
-
-// MakeFreeBlock
-/*inline
-TFreeBlock *
-TFreeBlock::MakeFreeBlock(void *address, ssize_t offset, Block *previous,
-                                                 size_t size, bool hasNext, 
TFreeBlock *previousFree,
-                                                 TFreeBlock *nextFree)
-{
-       TFreeBlock *block = (TFreeBlock*)((char*)address + offset);
-       block->SetTo(previous, size, hasNext, previousFree, nextFree);
-       if (hasNext)
-               block->GetNextBlock()->SetPreviousBlock(block);
-       return block;
-}*/
-
-
-// BlockReference
-
-// GetDataAt
-inline
-void *
-BlockReference::GetDataAt(ssize_t offset) const
-{
-       return (char*)fBlock->GetData() + offset;
-}
-
-#endif // BA_DEFINE_INLINES
-
-#endif // BLOCK_H
diff --git a/src/add-ons/kernel/file_systems/ramfs/BlockAllocator.cpp b/src/add-ons/kernel/file_systems/ramfs/BlockAllocator.cpp
deleted file mode 100644
index 6e4c001b89..0000000000
--- a/src/add-ons/kernel/file_systems/ramfs/BlockAllocator.cpp
+++ /dev/null
@@ -1,429 +0,0 @@
-/*
- * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
- * All rights reserved. Distributed under the terms of the MIT license.
- */
-
-// debugging
-#define BA_DEFINE_INLINES      1
-
-#include "AllocationInfo.h"
-#include "BlockAllocator.h"
-#include "BlockAllocatorArea.h"
-#include "BlockAllocatorAreaBucket.h"
-#include "DebugSupport.h"
-
-
-// BlockAllocator
-
-// constructor
-BlockAllocator::BlockAllocator(size_t areaSize)
-       : fReferenceManager(),
-         fBuckets(NULL),
-         fBucketCount(0),
-         fAreaSize(areaSize),
-         fAreaCount(0),
-         fFreeBytes(0)
-{
-       // create and init buckets
-       fBucketCount = bucket_containing_size(areaSize) + 1;
-       fBuckets = new(nothrow) AreaBucket[fBucketCount];
-       size_t minSize = 0;
-       for (int32 i = 0; i < fBucketCount; i++) {
-               size_t maxSize = (1 << i) * kMinNetBlockSize;
-               fBuckets[i].SetIndex(i);
-               fBuckets[i].SetSizeLimits(minSize, maxSize);
-               minSize = maxSize;
-       }
-}
-
-// destructor
-BlockAllocator::~BlockAllocator()
-{
-       if (fBuckets)
-               delete[] fBuckets;
-}
-
-// InitCheck
-status_t
-BlockAllocator::InitCheck() const
-{
-       RETURN_ERROR(fBuckets ? B_OK : B_NO_MEMORY);
-}
-
-// AllocateBlock
-BlockReference *
-BlockAllocator::AllocateBlock(size_t usableSize)
-{
-#if ENABLE_BA_PANIC
-if (fPanic)
-       return NULL;
-#endif
-//PRINT(("BlockAllocator::AllocateBlock(%lu)\n", usableSize));
-       Block *block = NULL;
-       if (usableSize > 0 && usableSize <= 
Area::GetMaxFreeBytesFor(fAreaSize)) {
-               // get a block reference
-               BlockReference *reference = 
fReferenceManager.AllocateReference();
-               if (reference) {
-                       block = _AllocateBlock(usableSize);
-                       // set reference / cleanup on failure
-                       if (block)
-                               block->SetReference(reference);
-                       else
-                               fReferenceManager.FreeReference(reference);
-               }
-               D(SanityCheck(false));
-       }
-//PRINT(("BlockAllocator::AllocateBlock() done: %p\n", block));
-       return (block ? block->GetReference() : NULL);
-}
-
-// FreeBlock
-void
-BlockAllocator::FreeBlock(BlockReference *blockReference)
-{
-#if ENABLE_BA_PANIC
-if (fPanic)
-       return;
-#endif
-D(if (!CheckBlock(blockReference)) return;);
-       Block *block = (blockReference ? blockReference->GetBlock() : NULL);
-//PRINT(("BlockAllocator::FreeBlock(%p)\n", block));
-       Area *area = NULL;
-       if (block && !block->IsFree() && (area = _AreaForBlock(block)) != NULL) 
{
-               _FreeBlock(area, block, true);
-               D(SanityCheck(false));
-               if (_DefragmentingRecommended())
-                       _Defragment();
-       }
-//PRINT(("BlockAllocator::FreeBlock() done\n"));
-}
-
-// ResizeBlock
-BlockReference *
-BlockAllocator::ResizeBlock(BlockReference *blockReference, size_t usableSize)
-{
-#if ENABLE_BA_PANIC
-if (fPanic)
-       return NULL;
-#endif
-D(if (!CheckBlock(blockReference)) return NULL;);
-//PRINT(("BlockAllocator::ResizeBlock(%p, %lu)\n", blockReference, 
usableSize));
-       Block *block = (blockReference ? blockReference->GetBlock() : NULL);
-       Block *resultBlock = NULL;
-       Area *area = NULL;
-       if (block && !block->IsFree() && (area = _AreaForBlock(block)) != NULL) 
{
-//PRINT(("BlockAllocator::ResizeBlock(%p, %lu)\n", block, usableSize));
-               if (usableSize) {
-                       // try to let the area resize the block
-                       size_t blockSize = block->GetSize();
-                       size_t areaFreeBytes = area->GetFreeBytes();
-                       bool needsDefragmenting = area->NeedsDefragmenting();
-//PRINT(("  block reference: %p / %p\n", blockReference, 
block->GetReference()));
-                       resultBlock = area->ResizeBlock(block, usableSize);
-                       block = blockReference->GetBlock();
-                       if (resultBlock) {
-//PRINT(("  area succeeded in resizing the block\n"));
-//PRINT(("  block reference now: %p\n", resultBlock->GetReference()));
-                               // the area was able to resize the block
-                               _RethinkAreaBucket(area, area->GetBucket(),
-                                                                  
needsDefragmenting);
-                               fFreeBytes = fFreeBytes + area->GetFreeBytes() 
- areaFreeBytes;
-                               // Defragment only, if the area was able to 
resize the block,
-                               // the new block is smaller than the old one 
and defragmenting
-                               // is recommended.
-                               if (blockSize > resultBlock->GetSize()
-                                       && _DefragmentingRecommended()) {
-                                       _Defragment();
-                               }
-                       } else {
-//PRINT(("  area failed to resize the block\n"));
-                               // the area failed: allocate a new block, copy 
the data, and
-                               // free the old one
-                               resultBlock = _AllocateBlock(usableSize);
-                               block = blockReference->GetBlock();
-                               if (resultBlock) {
-                                       memcpy(resultBlock->GetData(), 
block->GetData(),
-                                                  block->GetUsableSize());
-                                       
resultBlock->SetReference(block->GetReference());
-                                       _FreeBlock(area, block, false);
-                               }
-                       }
-               } else
-                       FreeBlock(blockReference);
-               D(SanityCheck(false));
-//PRINT(("BlockAllocator::ResizeBlock() done: %p\n", resultBlock));
-       }
-       return (resultBlock ? resultBlock->GetReference() : NULL);
-}
-
-// SanityCheck
-bool
-BlockAllocator::SanityCheck(bool deep) const
-{
-       // iterate through all areas of all buckets
-       int32 areaCount = 0;
-       size_t freeBytes = 0;
-       for (int32 i = 0; i < fBucketCount; i++) {
-               AreaBucket *bucket = fBuckets + i;
-               if (deep) {
-                       if (!bucket->SanityCheck(deep))
-                               return false;
-               }
-               for (Area *area = bucket->GetFirstArea();
-                        area;
-                        area = bucket->GetNextArea(area)) {
-                       areaCount++;
-                       freeBytes += area->GetFreeBytes();
-               }
-       }
-       // area count
-       if (areaCount != fAreaCount) {
-               FATAL("fAreaCount is %" B_PRId32 ", but should be %" B_PRId32 
"\n", fAreaCount,
-                          areaCount);
-               BA_PANIC("BlockAllocator: Bad free bytes.");
-               return false;
-       }
-       // free bytes
-       if (fFreeBytes != freeBytes) {
-               FATAL("fFreeBytes is %lu, but should be %lu\n", fFreeBytes,
-                          freeBytes);
-               BA_PANIC("BlockAllocator: Bad free bytes.");
-               return false;
-       }
-       return true;
-}
-
-// CheckArea
-bool
-BlockAllocator::CheckArea(Area *checkArea)
-{
-       for (int32 i = 0; i < fBucketCount; i++) {
-               AreaBucket *bucket = fBuckets + i;
-               for (Area *area = bucket->GetFirstArea();
-                        area;
-                        area = bucket->GetNextArea(area)) {
-                       if (area == checkArea)
-                               return true;
-               }
-       }
-       FATAL("Area %p is not a valid Area!\n", checkArea);
-       BA_PANIC("Invalid Area.");
-       return false;
-}
-
-// CheckBlock
-bool
-BlockAllocator::CheckBlock(Block *block, size_t minSize)
-{
-       Area *area = _AreaForBlock(block);
-       return (area/* && CheckArea(area)*/ && area->CheckBlock(block, 
minSize));
-}
-
-// CheckBlock
-bool
-BlockAllocator::CheckBlock(BlockReference *reference, size_t minSize)
-{
-       return (fReferenceManager.CheckReference(reference)
-                       && CheckBlock(reference->GetBlock(), minSize));
-}
-
-// GetAllocationInfo
-void
-BlockAllocator::GetAllocationInfo(AllocationInfo &info)
-{
-       fReferenceManager.GetAllocationInfo(info);
-       info.AddOtherAllocation(sizeof(AreaBucket), fBucketCount);
-       info.AddAreaAllocation(fAreaSize, fAreaCount);
-}
-
-// _AreaForBlock
-inline
-BlockAllocator::Area *
-BlockAllocator::_AreaForBlock(Block *block)
-{
-       Area *area = NULL;
-       area_id id = area_for(block);
-       area_info info;
-       if (id >= 0 && get_area_info(id, &info) == B_OK)
-               area = (Area*)info.address;
-D(if (!CheckArea(area)) return NULL;);
-       return area;
-}
-
-// _AllocateBlock
-Block *
-BlockAllocator::_AllocateBlock(size_t usableSize, bool dontCreateArea)
-{
-       Block *block = NULL;
-       // Get the last area (the one with the most free space) and try
-       // to let it allocate a block. If that fails, allocate a new area.
-       // find a bucket for the allocation
-// TODO: optimize
-       AreaBucket *bucket = NULL;
-       int32 index = bucket_containing_min_size(usableSize);
-       for (; index < fBucketCount; index++) {
-               if (!fBuckets[index].IsEmpty()) {
-                       bucket = fBuckets + index;
-                       break;
-               }
-       }
-       // get an area: if we have one, from the bucket, else create a new
-       // area
-       Area *area = NULL;
-       if (bucket)
-               area = bucket->GetFirstArea();
-       else if (!dontCreateArea) {
-               area = Area::Create(fAreaSize);
-               if (area) {
-                       fAreaCount++;
-                       fFreeBytes += area->GetFreeBytes();
-                       bucket = fBuckets + area->GetBucketIndex();
-                       bucket->AddArea(area);
-PRINT("New area allocated. area count now: %" B_PRId32 ", free bytes: %lu\n",
-fAreaCount, fFreeBytes);
-               }
-       }
-       // allocate a block
-       if (area) {
-               size_t areaFreeBytes = area->GetFreeBytes();
-               bool needsDefragmenting = area->NeedsDefragmenting();
-               block = area->AllocateBlock(usableSize);
-               // move the area into another bucket, if necessary
-               if (block) {
-                       _RethinkAreaBucket(area, bucket, needsDefragmenting);
-                       fFreeBytes = fFreeBytes + area->GetFreeBytes() - 
areaFreeBytes;
-               }
-#if ENABLE_BA_PANIC
-else if (!fPanic) {
-FATAL("Block allocation failed unexpectedly.\n");
-PRINT("  usableSize: %lu, areaFreeBytes: %lu\n", usableSize, areaFreeBytes);
-BA_PANIC("Block allocation failed unexpectedly.");
-//block = area->AllocateBlock(usableSize);
-}
-#endif
-       }
-       return block;
-}
-
-// _FreeBlock
-void
-BlockAllocator::_FreeBlock(Area *area, Block *block, bool freeReference)
-{
-       size_t areaFreeBytes = area->GetFreeBytes();
-       AreaBucket *bucket = area->GetBucket();
-       bool needsDefragmenting = area->NeedsDefragmenting();
-       // free the block and the block reference
-       BlockReference *reference = block->GetReference();
-       area->FreeBlock(block);
-       if (reference && freeReference)
-               fReferenceManager.FreeReference(reference);
-       // move the area into another bucket, if necessary
-       _RethinkAreaBucket(area, bucket, needsDefragmenting);
-       fFreeBytes = fFreeBytes + area->GetFreeBytes() - areaFreeBytes;
-}
-
-// _RethinkAreaBucket
-inline
-void
-BlockAllocator::_RethinkAreaBucket(Area *area, AreaBucket *bucket,
-                                                                  bool 
needsDefragmenting)
-{
-       AreaBucket *newBucket = fBuckets + area->GetBucketIndex();
-       if (newBucket != bucket
-               || needsDefragmenting != area->NeedsDefragmenting()) {
-               bucket->RemoveArea(area);
-               newBucket->AddArea(area);
-       }
-}
-
-// _DefragmentingRecommended
-inline
-bool
-BlockAllocator::_DefragmentingRecommended()
-{
-       // Don't know, whether this makes much sense: We don't try to 
defragment,
-       // when not at least a complete area could be deleted, and some 
tolerance
-       // being left (a fixed value plus 1/32 of the used bytes).
-       size_t usedBytes = fAreaCount * Area::GetMaxFreeBytesFor(fAreaSize)
-                                          - fFreeBytes;
-       return (fFreeBytes > fAreaSize + kDefragmentingTolerance + usedBytes / 
32);
-}
-
-// _Defragment
-bool
-BlockAllocator::_Defragment()
-{
-       bool success = false;
-       // We try to empty the least populated area by moving its blocks to 
other
-       // areas.
-       if (fFreeBytes > fAreaSize) {
-               // find the least populated area
-               // find the bucket with the least populated areas
-               AreaBucket *bucket = NULL;
-               for (int32 i = fBucketCount - 1; i >= 0; i--) {
-                       if (!fBuckets[i].IsEmpty()) {
-                               bucket = fBuckets + i;
-                               break;
-                       }
-               }
-               // find the area in the bucket
-               Area *area = NULL;
-               if (bucket) {
-                       area = bucket->GetFirstArea();
-                       Area *bucketArea = area;
-                       while ((bucketArea = bucket->GetNextArea(bucketArea)) 
!= NULL) {
-                               if (bucketArea->GetFreeBytes() > 
area->GetFreeBytes())
-                                       area = bucketArea;
-                       }
-               }
-               if (area) {
-                       // remove the area from the bucket
-                       bucket->RemoveArea(area);
-                       fFreeBytes -= area->GetFreeBytes();
-                       // iterate through the blocks in the area and try to 
find a new
-                       // home for them
-                       success = true;
-                       while (Block *block = area->GetFirstUsedBlock()) {
-                               Block *newBlock = 
_AllocateBlock(block->GetUsableSize(), true);
-                               if (newBlock) {
-                                       // got a new block: copy the data to it 
and free the old
-                                       // one
-                                       memcpy(newBlock->GetData(), 
block->GetData(),
-                                                  block->GetUsableSize());
-                                       
newBlock->SetReference(block->GetReference());
-                                       block->SetReference(NULL);
-                                       area->FreeBlock(block, true);
-#if ENABLE_BA_PANIC
-                                       if (fPanic) {
-                                               PRINT("Panicked while trying to 
free block %p\n",
-                                                          block);
-                                               success = false;
-                                               break;
-                                       }
-#endif
-                               } else {
-                                       success = false;
-                                       break;
-                               }
-                       }
-                       // delete the area
-                       if (success && area->IsEmpty()) {
-                               area->Delete();
-                               fAreaCount--;
-PRINT("defragmenting: area deleted\n");
-                       } else {
-PRINT("defragmenting: failed to empty area\n");
-                               // failed: re-add the area
-                               fFreeBytes += area->GetFreeBytes();
-                               AreaBucket *newBucket = fBuckets + 
area->GetBucketIndex();
-                               newBucket->AddArea(area);
-                       }
-               }
-               D(SanityCheck(false));
-       }
-       return success;
-}
-
-#if ENABLE_BA_PANIC
-bool BlockAllocator::fPanic = false;
-#endif
diff --git a/src/add-ons/kernel/file_systems/ramfs/BlockAllocator.h b/src/add-ons/kernel/file_systems/ramfs/BlockAllocator.h
deleted file mode 100644
index 87254f0f98..0000000000
--- a/src/add-ons/kernel/file_systems/ramfs/BlockAllocator.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
- * All rights reserved. Distributed under the terms of the MIT license.
- */
-#ifndef BLOCK_ALLOCATOR_H
-#define BLOCK_ALLOCATOR_H
-
-#include <OS.h>
-
-#include "Block.h"
-#include "BlockReferenceManager.h"
-#include "DebugSupport.h"
-#include "List.h"
-
-#define ENABLE_BA_PANIC        1
-#if ENABLE_BA_PANIC
-#define BA_PANIC(x)    { PANIC(x); BlockAllocator::fPanic = true; }
-#endif
-
-class AllocationInfo;
-
-// BlockAllocator
-class BlockAllocator {
-public:
-       BlockAllocator(size_t areaSize);
-       ~BlockAllocator();
-
-       status_t InitCheck() const;
-
-       BlockReference *AllocateBlock(size_t usableSize);
-       void FreeBlock(BlockReference *block);
-       BlockReference *ResizeBlock(BlockReference *block, size_t usableSize);
-
-       size_t GetAvailableBytes() const        { return fAreaCount * 
fAreaSize; }
-       size_t GetFreeBytes() const                     { return fFreeBytes; }
-       size_t GetUsedBytes() const                     { return fAreaCount * 
fAreaSize
-                                                                               
                 - fFreeBytes; }
-
-public:
-       class Area;
-       class AreaBucket;
-
-       // debugging only
-       bool SanityCheck(bool deep = false) const;
-       bool CheckArea(Area *area);
-       bool CheckBlock(Block *block, size_t minSize = 0);
-       bool CheckBlock(BlockReference *reference, size_t minSize = 0);
-       void GetAllocationInfo(AllocationInfo &info);
-
-private:
-       inline Area *_AreaForBlock(Block *block);
-       Block *_AllocateBlock(size_t usableSize, bool dontCreateArea = false);
-       void _FreeBlock(Area *area, Block *block, bool freeReference);
-       inline void _RethinkAreaBucket(Area *area, AreaBucket *bucket,
-                                                                  bool 
needsDefragmenting);
-       inline bool _DefragmentingRecommended();
-       bool _Defragment();
-
-private:
-       BlockReferenceManager           fReferenceManager;
-       AreaBucket                                      *fBuckets;
-       int32                                           fBucketCount;
-       size_t                                          fAreaSize;
-       int32                                           fAreaCount;
-       size_t                                          fFreeBytes;
-#if ENABLE_BA_PANIC
-public:
-       static bool                                     fPanic;
-#endif
-};
-
-#endif // BLOCK_ALLOCATOR_H
diff --git a/src/add-ons/kernel/file_systems/ramfs/BlockAllocatorArea.cpp b/src/add-ons/kernel/file_systems/ramfs/BlockAllocatorArea.cpp
deleted file mode 100644
index 7382998e0a..0000000000
--- a/src/add-ons/kernel/file_systems/ramfs/BlockAllocatorArea.cpp
+++ /dev/null
@@ -1,599 +0,0 @@
-/*
- * Copyright 2007, Ingo Weinhold, ingo_weinhold@xxxxxx.
- * All rights reserved. Distributed under the terms of the MIT license.
- */
-
-#include "BlockAllocatorArea.h"
-#include "DebugSupport.h"
-
-// constructor
-BlockAllocator::Area::Area(area_id id, size_t size)
-       : fBucket(NULL),
-         fID(id),
-         fSize(size),
-         fFreeBytes(0),
-         fFreeBlockCount(1),
-         fUsedBlockCount(0),
-         fFirstBlock(NULL),
-         fLastBlock(NULL),
-         fFirstFree(NULL),
-         fLastFree(NULL)
-{
-       size_t headerSize = block_align_ceil(sizeof(Area));
-       fFirstFree = (TFreeBlock*)((char*)this + headerSize);
-       fFirstFree->SetTo(NULL, block_align_floor(fSize - headerSize), false, 
NULL,
-                                         NULL);
-       fFirstBlock = fLastBlock = fLastFree = fFirstFree;
-       fFreeBytes = fFirstFree->GetUsableSize();
-}
-
-// Create
-BlockAllocator::Area *
-BlockAllocator::Area::Create(size_t size)
-{
-       Area *area = NULL;
-       void *base = NULL;
-#if USER
-       area_id id = create_area("block alloc", &base, B_ANY_ADDRESS,
-                                                        size, B_NO_LOCK, 
B_READ_AREA | B_WRITE_AREA);
-#else
-       area_id id = create_area("block alloc", &base, B_ANY_KERNEL_ADDRESS,
-                                                        size, B_FULL_LOCK, 
B_READ_AREA | B_WRITE_AREA);
-#endif
-       if (id >= 0) {
-               area = new(base) Area(id, size);
-       } else {
-               ERROR("BlockAllocator::Area::Create(%lu): Failed to create 
area: %s\n",
-                       size, strerror(id));
-       }
-       return area;
-}
-
-// Delete
-void
-BlockAllocator::Area::Delete()
-{
-       delete_area(fID);
-}
-
-// AllocateBlock
-Block *
-BlockAllocator::Area::AllocateBlock(size_t usableSize, bool dontDefragment)
-{
-if (kMinBlockSize != block_align_ceil(sizeof(TFreeBlock))) {
-FATAL("kMinBlockSize is not correctly initialized! Is %lu, but should be: "
-"%lu\n", kMinBlockSize, block_align_ceil(sizeof(TFreeBlock)));
-BA_PANIC("kMinBlockSize not correctly initialized.");
-return NULL;
-}
-       if (usableSize == 0)
-               return NULL;
-       Block *newBlock = NULL;
-       size_t size = max(usableSize + sizeof(BlockHeader), kMinBlockSize);
-       size = block_align_ceil(size);
-       if (size <= _GetBlockFreeBytes()) {
-               // find first fit
-               TFreeBlock *block = _FindFreeBlock(size);
-               if (!block && !dontDefragment) {
-                       // defragmenting is necessary
-                       _Defragment();
-                       block = _FindFreeBlock(size);
-                       if (!block) {
-                               // no free block
-                               // Our data structures seem to be corrupted, 
since
-                               // _GetBlockFreeBytes() promised that we would 
have enough
-                               // free space.
-                               FATAL("Couldn't find free block of min size %lu 
after "
-                                          "defragmenting, although we should 
have %lu usable free "
-                                          "bytes!\n", size, 
_GetBlockFreeBytes());
-                               BA_PANIC("Bad area free bytes.");
-                       }
-               }
-               if (block) {
-                       // found a free block
-                       size_t remainder = block->GetSize() - size;
-                       if (remainder >= kMinBlockSize) {
-                               // enough space left for a free block
-                               Block *freePrev = block->GetPreviousBlock();
-//                             TFreeBlock *prevFree = 
block->GetPreviousFreeBlock();
-//                             TFreeBlock *nextFree = 
block->GetNextFreeBlock();
-//                             newBlock = block;
-                               _MoveResizeFreeBlock(block, size, remainder);
-                               // setup the new block
-//                             newBlock->SetSize(size, true);
-//                             newBlock->SetFree(false);
-                               newBlock = _MakeUsedBlock(block, 0, freePrev, 
size, true);
-                       } else {
-                               // not enough space left: take the free block 
over completely
-                               // remove the block from the free list
-                               _RemoveFreeBlock(block);
-                               newBlock = block;
-                               newBlock->SetFree(false);
-                       }
-                       if (fFreeBlockCount)
-                               fFreeBytes -= newBlock->GetSize();
-                       else
-                               fFreeBytes = 0;
-                       fUsedBlockCount++;
-               }
-       }
-       D(SanityCheck());
-       return newBlock;
-}
-
-// FreeBlock
-void
-BlockAllocator::Area::FreeBlock(Block *block, bool dontDefragment)
-{
-       if (block) {
-               // mark the block free and insert it into the free list
-               block->SetFree(true);
-               TFreeBlock *freeBlock = (TFreeBlock*)block;
-               _InsertFreeBlock(freeBlock);
-               fUsedBlockCount--;
-               if (fFreeBlockCount == 1)
-                       fFreeBytes += freeBlock->GetUsableSize();
-               else
-                       fFreeBytes += freeBlock->GetSize();
-               // try coalescing with the next and the previous free block
-D(SanityCheck());
-               _CoalesceWithNext(freeBlock);
-D(SanityCheck());
-               _CoalesceWithNext(freeBlock->GetPreviousFreeBlock());
-               // defragment, if sensible
-               if (!dontDefragment && _DefragmentingRecommended())
-                       _Defragment();
-               D(SanityCheck());
-       }
-}
-
-// ResizeBlock
-Block *
-BlockAllocator::Area::ResizeBlock(Block *block, size_t newUsableSize,
-                                                                 bool 
dontDefragment)
-{
-//PRINT(("Area::ResizeBlock(%p, %lu)\n", block, newUsableSize));
-// newUsableSize must be >0 !
-       if (newUsableSize == 0)
-               return NULL;
-       Block *resultBlock = NULL;
-       if (block) {
-               size_t size = block->GetSize();
-               size_t newSize = max(newUsableSize + sizeof(BlockHeader),
-                                                        kMinBlockSize);
-               newSize = block_align_ceil(newSize);
-               if (newSize == size) {
-                       // size doesn't change: nothing to do
-                       resultBlock = block;
-               } else if (newSize < size) {
-                       // shrink the block
-                       size_t sizeDiff = size - newSize;
-                       Block *nextBlock = block->GetNextBlock();
-                       if (nextBlock && nextBlock->IsFree()) {
-                               // join the space with the adjoining free block
-                               TFreeBlock *freeBlock = 
nextBlock->ToFreeBlock();
-                               _MoveResizeFreeBlock(freeBlock, -sizeDiff,
-                                                                        
freeBlock->GetSize() + sizeDiff);
-                               // resize the block and we're done
-                               block->SetSize(newSize, true);
-                               fFreeBytes += sizeDiff;
-                       } else if (sizeDiff >= sizeof(TFreeBlock)) {
-                               // the freed space is large enough for a free 
block
-                               TFreeBlock *newFree = _MakeFreeBlock(block, 
newSize, block,
-                                       sizeDiff, nextBlock, NULL, NULL);
-                               _InsertFreeBlock(newFree);
-                               block->SetSize(newSize, true);
-                               if (fFreeBlockCount == 1)
-                                       fFreeBytes += newFree->GetUsableSize();
-                               else
-                                       fFreeBytes += newFree->GetSize();
-                               if (!dontDefragment && 
_DefragmentingRecommended())
-                                       _Defragment();
-                       }       // else: insufficient space for a free block: 
no changes
-                       resultBlock = block;
-               } else {
-//PRINT(("  grow...\n"));
-                       // grow the block
-                       size_t sizeDiff = newSize - size;
-                       Block *nextBlock = block->GetNextBlock();
-                       if (nextBlock && nextBlock->IsFree()
-                               && nextBlock->GetSize() >= sizeDiff) {
-//PRINT(("  adjoining free block\n"));
-                               // there is a adjoining free block and it is 
large enough
-                               TFreeBlock *freeBlock = 
nextBlock->ToFreeBlock();
-                               size_t freeSize = freeBlock->GetSize();
-                               if (freeSize - sizeDiff >= sizeof(TFreeBlock)) {
-                                       // the remaining space is still large 
enough for a free
-                                       // block
-                                       _MoveResizeFreeBlock(freeBlock, 
sizeDiff,
-                                                                               
 freeSize - sizeDiff);
-                                       block->SetSize(newSize, true);
-                                       fFreeBytes -= sizeDiff;
-                               } else {
-                                       // the remaining free space wouldn't be 
large enough for
-                                       // a free block: consume the free block 
completely
-                                       Block *freeNext = 
freeBlock->GetNextBlock();
-                                       _RemoveFreeBlock(freeBlock);
-                                       block->SetSize(size + freeSize, 
freeNext);
-                                       _FixBlockList(block, 
block->GetPreviousBlock(), freeNext);
-                                       if (fFreeBlockCount == 0)
-                                               fFreeBytes = 0;
-                                       else
-                                               fFreeBytes -= freeSize;
-                               }
-                               resultBlock = block;
-                       } else {
-//PRINT(("  no adjoining free block\n"));
-                               // no (large enough) adjoining free block: 
allocate
-                               // a new block and copy the data to it
-                               BlockReference *reference = 
block->GetReference();
-                               resultBlock = AllocateBlock(newUsableSize, 
dontDefragment);
-                               block = reference->GetBlock();
-                               if (resultBlock) {
-                                       resultBlock->SetReference(reference);
-                                       memcpy(resultBlock->GetData(), 
block->GetData(),
-                                                  block->GetUsableSize());
-                                       FreeBlock(block, dontDefragment);
-                                       resultBlock = reference->GetBlock();
-                               }
-                       }
-               }
-       }
-       D(SanityCheck());
-//PRINT(("Area::ResizeBlock() done: %p\n", resultBlock));
-       return resultBlock;
-}
-
-// SanityCheck
-bool
-BlockAllocator::Area::SanityCheck() const
-{
-       // area ID
-       if (fID < 0) {
-               FATAL("Area ID < 0: %" B_PRIx32 "\n", fID);
-               BA_PANIC("Bad area ID.");
-               return false;
-       }
-       // size
-       size_t areaHeaderSize = block_align_ceil(sizeof(Area));
-       if (fSize < areaHeaderSize + sizeof(TFreeBlock)) {
-               FATAL("Area too small to contain area header and at least one 
free "
-                          "block: %lu bytes\n", fSize);
-               BA_PANIC("Bad area size.");
-               return false;
-       }
-       // free bytes
-       if (fFreeBytes > fSize) {
-               FATAL("Free size greater than area size: %lu vs %lu\n", 
fFreeBytes,
-                          fSize);
-               BA_PANIC("Bad area free bytes.");
-               return false;
-       }
-       // block count
-       if (fFreeBlockCount + fUsedBlockCount == 0) {
-               FATAL("Area contains no blocks at all.\n");
-               BA_PANIC("Bad area block count.");
-               return false;
-       }
-       // block list
-       uint32 usedBlockCount = 0;
-       uint32 freeBlockCount = 0;
-       size_t freeBytes = 0;
-       if (!fFirstBlock || !fLastBlock) {
-               FATAL("Invalid block list: first or last block NULL: first: %p, 
"
-                          "last: %p\n", fFirstBlock, fLastBlock);
-               BA_PANIC("Bad area block list.");
-               return false;
-       } else {
-               // iterate through block list and also check free list
-               int32 blockCount = fFreeBlockCount + fUsedBlockCount;
-               Block *block = fFirstBlock;
-               Block *prevBlock = NULL;
-               Block *prevFree = NULL;
-               Block *nextFree = fFirstFree;
-               bool blockListOK = true;
-               for (int32 i = 0; i < blockCount; i++) {
-                       blockListOK = false;
-                       if (!block) {
-                               FATAL("Encountered NULL in block list at index 
%" B_PRId32 ", although "
-                                          "list should have %" B_PRId32 " 
blocks\n", i, blockCount);
-                               BA_PANIC("Bad area block list.");
-                               return false;
-                       }
-                       uint64 address = (addr_t)block;
-                       // block within area?
-                       if (address < (addr_t)this + areaHeaderSize
-                               || address + sizeof(TFreeBlock) > (addr_t)this 
+ fSize) {
-                               FATAL("Utterly mislocated block: %p, area: %p, "
-                                          "size: %lu\n", block, this, fSize);
-                               BA_PANIC("Bad area block.");
-                               return false;
-                       }
-                       // block too large for area?
-                       size_t blockSize = block->GetSize();
-                       if (blockSize < sizeof(TFreeBlock)
-                               || address + blockSize > (addr_t)this + fSize) {
-                               FATAL("Mislocated block: %p, size: %lu, area: 
%p, "
-                                          "size: %lu\n", block, blockSize, 
this, fSize);
-                               BA_PANIC("Bad area block.");
-                               return false;
-                       }
-                       // alignment
-                       if (block_align_floor(address) != address
-                               || block_align_floor(blockSize) != blockSize) {
-                               FATAL("Block %" B_PRId32 " not properly 
aligned: %p, size: %lu\n",
-                                          i, block, blockSize);
-                               BA_PANIC("Bad area block.");
-                               return false;
-                       }
-                       // previous block
-                       if (block->GetPreviousBlock() != prevBlock) {

[ *** diff truncated: 861 lines dropped *** ]


############################################################################

Commit:      181d68fbd47bad25119ef630595e80ee6c59d051
URL:         https://git.haiku-os.org/haiku/commit/?id=181d68fbd47b
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 22:47:56 2019 UTC

ramfs: GCC 2 fixes.

----------------------------------------------------------------------------

############################################################################

Commit:      69c34116f08637212b78859b8e71922d408b1475
URL:         https://git.haiku-os.org/haiku/commit/?id=69c34116f086
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sat Aug 31 22:48:08 2019 UTC

ram_disk: Add note about code duplication with ramfs.

----------------------------------------------------------------------------

############################################################################

Commit:      a9be0efb2e387ff9c9ea2c37ef606e2be2b0a897
URL:         https://git.haiku-os.org/haiku/commit/?id=a9be0efb2e38
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sun Sep  1 00:36:38 2019 UTC

kernel/fs: Add support for setting custom VMCaches in vnodes.

This adds one (private) VFS function, and checks in all usages of
the vnode->cache as a VMVnodeCache that it really is one. (Generic
usages, for the moment just the ReleaseRef() calls in vnode
destruction, are intentionally not touched.)

This will be used by ramfs to set the cache from its own,
so that map_file() calls on a ramfs can work.
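
For illustration only, here is a minimal sketch of how a file system's open
hook might hand its own VMCache to the vnode layer through a function like
the one this commit adds. The name vfs_set_vnode_cache() and its signature
are assumptions (the new private function itself is not shown in this
excerpt), as are the ramfs_node structure and its data_cache member;
vfs_lookup_vnode() is used here only as one plausible way to reach the
VFS-level vnode.

#include <fs_interface.h>
#include <vfs.h>            // private: headers/private/kernel/vfs.h
#include <vm/VMCache.h>

// Illustrative stand-in for the file system's own node structure.
struct ramfs_node {
    ino_t       id;
    VMCache*    data_cache; // cache backing the node's file data
};

static status_t
example_open(fs_volume* volume, fs_vnode* fsVnode, int openMode,
    void** _cookie)
{
    ramfs_node* node = (ramfs_node*)fsVnode->private_node;

    // Obtain the VFS-level vnode for this (device, inode) pair.
    struct vnode* vnode;
    status_t error = vfs_lookup_vnode(volume->id, node->id, &vnode);
    if (error != B_OK)
        return error;

    // Install the file system's own cache, so vm_map_file() on this vnode
    // maps the same pages the file system already uses for the file data.
    error = vfs_set_vnode_cache(vnode, node->data_cache); // assumed name
    if (error != B_OK)
        return error;

    *_cookie = NULL;
    return B_OK;
}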

----------------------------------------------------------------------------

############################################################################

Revision:    hrev53442
Commit:      58a582ff22126e500b8fa44d822c5365d0f69da7
URL:         https://git.haiku-os.org/haiku/commit/?id=58a582ff2212
Author:      Augustin Cavalier <waddlesplash@xxxxxxxxx>
Date:        Sun Sep  1 00:37:27 2019 UTC

ramfs: Set the vnode's cache object when opening files.

Now it is possible to run applications, do Git checkouts, etc.
on a ramfs (and those seem to work just fine -- a git checkout
followed by a git fsck both succeeded.)
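
As a quick userland sanity check of what this enables, a memory-mapped read
of a file stored on a ramfs volume should now work. The mount point
/tmp/ramfs below is only an assumed example path; any directory with a
ramfs volume mounted on it would do.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int
main()
{
    const char* path = "/tmp/ramfs/hello.txt";  // assumed ramfs location
    const char text[] = "mapped on ramfs\n";
    const size_t length = sizeof(text) - 1;

    // Create a small file on the ramfs volume and fill it.
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, text, length) != (ssize_t)length) {
        perror("write");
        close(fd);
        return 1;
    }

    // mmap() goes through the vnode's cache object; with these changes a
    // ramfs file can be mapped like one on any other file system.
    void* mapped = mmap(NULL, length, PROT_READ, MAP_SHARED, fd, 0);
    if (mapped == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    printf("%.*s", (int)length, (const char*)mapped);

    munmap(mapped, length);
    close(fd);
    return 0;
}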

----------------------------------------------------------------------------

