Error in Oracle Coherence: "No storage-enabled nodes exist for service DistributedSessions"


Solution 1

Coherence requires at least one storage-enabled server in the cluster. The cache server you started is not storage-enabled.

For example, the .\bin directory of the Coherence install contains coherence.cmd/.sh, which is not storage-enabled by default. You can run cache-server.cmd to start a storage-enabled cache server, then run coherence.cmd in another window to start a second, storage-disabled node.

Alternatively, you can edit coherence.cmd to change "set storage_enabled=false" to "set storage_enabled=true". You should then be able to put data into the cache from the coherence.cmd command prompt.

Alternatively, you can enable local storage in one of the JVMs with -Dtangosol.coherence.distributed.localstorage=true.
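
As a concrete sketch, assuming coherence.jar is in the current directory and the default operational configuration is on the classpath (paths and cache names here are illustrative), a minimal two-node setup could be started like this:

```shell
# Terminal 1: start a storage-enabled cache server (this node holds the data)
java -Dtangosol.coherence.distributed.localstorage=true \
     -cp coherence.jar com.tangosol.net.DefaultCacheServer

# Terminal 2: join the same cluster as a storage-disabled console client
java -Dtangosol.coherence.distributed.localstorage=false \
     -cp coherence.jar com.tangosol.net.CacheFactory
```

With at least one storage-enabled member running, the "No storage-enabled nodes exist" error should no longer appear when the client puts data into a partitioned cache.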

If that does not work, it could be a memory issue: there is not sufficient memory to load any further data.

Solution 2

As far as I remember, localstorage=false tells the service not to hold any data at all, so with over 10 million records I suspect your Coherence nodes are running out of memory and cannot load any more data. Try changing your eviction policy as well, but in my view localstorage should be true on your storage nodes. This property is used on proxies to tell them whether or not to act as storage servers as well.
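
One way to confirm what the cluster actually sees is to ask the partitioned service for its storage-enabled members. A minimal sketch, assuming coherence.jar on the classpath, a reachable cluster, and an illustrative cache name "example":

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

public class StorageCheck {
    public static void main(String[] args) {
        // Join the cluster and obtain a partitioned cache
        NamedCache cache = CacheFactory.getCache("example");

        // The cache service for a distributed scheme is a PartitionedService
        PartitionedService service =
                (PartitionedService) cache.getCacheService();

        // Members running with localstorage=true for this service;
        // an empty set here reproduces the "No storage-enabled nodes" error
        System.out.println("Storage-enabled members: "
                + service.getOwnershipEnabledMembers());

        CacheFactory.shutdown();
    }
}
```

If the printed set is empty, no node in the cluster is storing data for that service, regardless of what the individual JVM flags were intended to do.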

Author: user2965814

Updated on June 05, 2022

Comments

  • user2965814
    user2965814 almost 2 years

    I made a Java application to load data into a distributed cache. The application loads data well, but when loading more than 10 million records I get the error "No storage-enabled nodes exist for service DistributedSessions". When I load fewer than 10 million records it works fine. I created one cluster in WebLogic and joined 4 nodes as follows:

    • 2 servers (storage enabled = true) to store data

    • 2 clients (storage enabled = false) to view and query only

    tangosol-coherence-override.xml

    <cluster-config>
        <member-identity>
            <cluster-name system-property="tangosol.coherence.cluster">CLUSTER_NAME</cluster-name>
        </member-identity>
        <multicast-listener>
            <time-to-live system-property="tangosol.coherence.ttl">30</time-to-live>
            <address>224.1.1.1</address>
            <port>12346</port>
        </multicast-listener>
    
    </cluster-config>
    
    <logging-config>
    

    coherence-cache-config.xml

    <?xml version="1.0"?>
    

    <serializer system-property="tangosol.coherence.serializer"/>
    
    
    <socket-provider system-property="tangosol.coherence.socketprovider"/>
    

    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
    

    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>

      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>

      <autostart>true</autostart>
    </distributed-scheme>
    
    <local-scheme>
      <scheme-name>example-binary-backing-map</scheme-name>
    
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>{back-size-limit 0}</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <expiry-delay>0</expiry-delay>
    
      <cachestore-scheme></cachestore-scheme>
    </local-scheme>
    

    Server Argument:

    -Xms6g

    -Xmx12g

    -Xincgc

    -XX:-UseGCOverheadLimit

    -Dtangosol.coherence.distributed.localstorage=true

    -Dtangosol.coherence.cluster=CLUSTER_NAME

    -Dtangosol.coherence.clusteraddress=224.1.1.1

    -Dtangosol.coherence.clusterport=12346

    Client Argument:

    -Xms1g

    -Xmx1g

    -Xincgc

    -XX:-UseGCOverheadLimit

    -Dtangosol.coherence.distributed.localstorage=false

    -Dtangosol.coherence.session.localstorage=true

    -Dtangosol.coherence.cluster=CLUSTER_NAME

    -Dtangosol.coherence.clusteraddress=224.1.1.1

    -Dtangosol.coherence.clusterport=12346

  • user2965814
    user2965814 over 8 years
    According to the Oracle documentation, localstorage=false means that the current JVM disables storage, not all JVMs.
  • Samuelens
    Samuelens over 8 years
    That's correct: such a node will not hold any cache data, so disabling localstorage on a non-proxy JVM doesn't make much sense.
  • mwoodman
    mwoodman over 5 years
    A note for Coherence 12.2.1: The System Property name has become "coherence.distributed.localstorage"