Replaced disk cannot be added back via vxdiskadm after being removed for replacement
SQL Server with Veritas HA
Hi All,
I am configuring a SQL Server cluster using Veritas HA, with DR replication using VVR.
I have configured the cluster and storage as per the cluster implementation document, with identical SQL instance installations, but the SQL Server agent configuration wizard gives an error stating there are no instances available to configure.
Can someone help me with a solution for this issue?
BR,
Encryption of Data at Rest
ASM with infoscale 7.1
Hi all,
We want to replicate an Oracle database with InfoScale 7.1.
After creating the volume, ASM does not register it as an ASM disk. The commands I ran are:
/etc/init.d/oracleasm createdisk ASM01 /dev/vx/dsk/oracledg/asm01
/etc/vx/bin/vxisasm /dev/vx/dsk/oracledg/asm01
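For reference, this is how I verify afterwards (a minimal sketch; ASM01 is the disk label used above, and the exact ASMLib invocations may differ by version):
# rescan and list the disks known to Oracle ASMLib
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
# query the device directly to see whether it carries an ASM label
/etc/init.d/oracleasm querydisk /dev/vx/dsk/oracledg/asm01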
Could you please help me resolve this issue.
NB: we don't have RAC.
Regards
Remove or modify HA Fire Drill actions
Looking for a method to modify or remove particular action points for the HA Fire Drill.
As an example, the Mount Agent action 'mountentry.vfd' specifically checks for mount entries in /etc/filesystems. As it happens, our environment and admins require entries to be left in /etc/filesystems, but with the mount attribute set to false. The configuration is fine for VCS, but it fails the Virtual Fire Drill every time.
Removing this action point, or modifying it to accept the configuration as valid, would allow the HA Fire Drill to return a meaningful 'Success' or 'Failure' for this environment.
Example /etc/filesystems entry for a VCS resource:
/ora/eomdma/admin:
dev = /dev/eomdma_admin
vfs = jfs2
log = /dev/eomdma_log
mount = false
check = false
account = false
Example non-VCS filesystem entry:
/app/tws/twsq:
dev = /dev/lv_TWSq
vfs = jfs2
log = /dev/hd8
mount = true
quota = no
account = false
Is the only option to forcibly modify the Mount Agent attributes?
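For context, this is how I currently exercise the check in question (a sketch only; the resource, group and system names below are placeholders for our real ones):
# run just the mountentry.vfd action point for one Mount resource
hares -action ora_admin_mnt mountentry.vfd -sys nodeA
# or run the full virtual fire drill for the service group
havfd ora_sg -sys nodeA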
VxVM vxdg ERROR V-5-1-10978 Disk group nbu_dg: import failed:No valid disk found containing disk group
Hi,
I have a two-node NetBackup cluster (VCS). Earlier today I migrated a volume from an old storage array to a new one. This is how I did it (a command-level sketch follows the list):
1. Present new Disk into the hosts
2. Scan for new disk on OS level
3. Scan for new disk on Veritas
4. Used the vxdiskadm utility to initialize the new disk
5. Added the new disk into the DiskGroup
6. Mirrored the volume to the new disk
7. After synchronization had completed, I removed the old plex from the disk group
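For reference, a command-level sketch of the same procedure (the volume name nbu_vol and the old plex name are placeholders; the device and disk media names are taken from the listings below):
# 2-3. rescan at the Veritas level once the OS sees the new LUN
vxdisk scandisks
# 4. initialize the new disk (equivalent of the vxdiskadm menu option)
/etc/vx/bin/vxdisksetup -i emc_clariion0_1
# 5. add the disk to the disk group
vxdg -g nbu_dg adddisk nbu_dg02=emc_clariion0_1
# 6. mirror the volume onto the new disk
vxassist -g nbu_dg mirror nbu_vol nbu_dg02
# 7. after the sync completes, dissociate and remove the old plex
vxplex -g nbu_dg -o rm dis <old_plex_name>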
All of the above steps were done on the active node (NODE1). Now, when I try to fail over the cluster resources to the inactive node (NODE2), I get the error below:
VxVM vxdg ERROR V-5-1-10978 Disk group nbu_dg: import failed:No valid disk found containing disk group
Then the cluster fails back to the original node (NODE1).
bash-3.2# vxdisk -o alldgs list (Active node)
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:ZFS - - ZFS
emc_clariion0_0 auto:cdsdisk - - online(Old disk)
emc_clariion0_1 auto:cdsdisk nbu_dg02 nbu_dg online (New disk)
=========================================================
bash-3.2# vxdisk -o alldgs list (Inactive node)
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:ZFS - - ZFS
emc_clariion0_0 auto:cdsdisk - - online (Old Disk)
From the above output, the new disk does not show up on the inactive node, where I would expect to see it with the disk group in a deported state.
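Presumably the new LUN also has to be discovered on NODE2 before the disk group can be imported there; a minimal rescan sketch for the inactive node (assuming Solaris, and that the LUN is already zoned/presented to NODE2):
# OS-level device discovery and cleanup of stale links
devfsadm -Cv
# make VxVM rescan and rebuild its device list
vxdctl enable
vxdisk scandisks
vxdisk -o alldgs list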
Please assist
Regards,
I have an 8-node cluster running CVM and need a procedure to upgrade VCS from 5.0 to 5.1. How do I upgrade VCS while CVM is running?
vxdcli is in restarting mode
SFHA = 6.0
O.S = Solaris 10
Error log: svc:/system/vxdcli:default: Method "/opt/VRTSsfmh/etc/vxdcli.sh start" failed with exit status
Description :
vxdcli is not starting and gives the error above. When I check the status of vxdcli, it is in restarting mode. I investigated further and found that this occurs because the host is not communicating with VOM. Please help me to sort out this issue; this is my live environment.
root@prod-phx-pri # /opt/VRTSsfmh/etc/vxdcli.sh status
RESTARTING
Check lock dir /var/vx/dcli/vxdcli.sh.lock
root@prod-phx-pri # pkginfo VRTSdcli
ERROR: information for "VRTSdcli" was not found
root@prod-phx-pri # pkgchk VRTSsfmh (no output)
root@prod-phx-pri # pkginfo -l VRTSfmh
ERROR: information for "VRTSfmh" was not found
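What I plan to try next, as a rough sketch (assuming vxdcli.sh also accepts a stop argument; the lock directory path is the one reported by the status command above):
/opt/VRTSsfmh/etc/vxdcli.sh stop
# clear the stale lock directory reported above, then restart and re-check
rm -rf /var/vx/dcli/vxdcli.sh.lock
/opt/VRTSsfmh/etc/vxdcli.sh start
/opt/VRTSsfmh/etc/vxdcli.sh status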
How to add virtual fencing disk to KVM guest
Hi,
I'm trying to install Veritas Storage Foundation Cluster File System HA 6.0.3 on 6 KVM RHEL guests.
I installed RHEL 6.4 and created 3 virtual fencing disks using qemu-img. I added them as SCSI disks to the node and ran vxdisksetup on them. Now when I try to configure fencing, I get the error below.
I'm wondering if anyone has installed cluster file system into KVM guest environment and could tell me how to add virtual fencing disks so that fencing configuration would be successful.
This is what I tried on the KVM:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/home/VM/VMImages/IOFencing1.img'/>
<target dev='sda' bus='scsi'/>
<shareable/>
<alias name='scsi0-0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/home/VM/VMImages/IOFencing2.img'/>
<target dev='sdb' bus='scsi'/>
<shareable/>
<alias name='scsi0-0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/home/VM/VMImages/IOFencing3.img'/>
<target dev='sdc' bus='scsi'/>
<shareable/>
<alias name='scsi0-0-0-2'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
Here is what the disks look like:
[root@node1 installsfcfsha601-201605190528YQa]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:cdsdisk - - online
disk_1 auto:cdsdisk - - online
disk_2 auto:cdsdisk - - online
disk_3 auto:none - - online invalid
disk_4 auto:none - - online invalid
vda auto:none - - online invalid
[root@node1 installsfcfsha601-201605190528YQa]#
[root@node1 installsfcfsha601-201605190528YQa]# cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/disk_0 QEMU%5FQEMU%20HARDDISK%5FDISKS%5Fdrive-scsi0-0-0-0
/dev/vx/rdmp/disk_1 QEMU%5FQEMU%20HARDDISK%5FDISKS%5Fdrive-scsi0-0-0-1
/dev/vx/rdmp/disk_2 QEMU%5FQEMU%20HARDDISK%5FDISKS%5Fdrive-scsi0-0-0-2
[root@node1 installsfcfsha601-201605190528YQa]#
The error I get:
kernel: I/O Fencing DISABLED!VXFEN INFO V-11-1-35 Fencing driver going into RUNNING state
kernel: GAB INFO V-15-1-20032 Port b closed
kernel: GAB INFO V-15-1-20229 Client VxFen deiniting GAB API
kernel: VXFEN INFO V-11-1-36 VxFEN configured at protocol version 30
kernel: GAB INFO V-15-1-20230 Client VxFen inited GAB API with handle ffff8803b8288ac0
kernel: GAB INFO V-15-1-20036 Port b[VxFen (refcount 2)] gen 93aa25 membership 0
kernel: GAB INFO V-15-1-20038 Port b[VxFen (refcount 2)] gen 93aa25 k_jeopardy ;12345
kernel: GAB INFO V-15-1-20040 Port b[VxFen (refcount 2)] gen 93aa25 visible ;12345
kernel: VXFEN WARNING V-11-1-12 Potentially a
kernel: preexisting split-brain.
kernel: Dropping out of cluster.
kernel: Refer to user documentation for
kernel: steps required to clear preexisting
kernel: split-brain.
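For what it is worth, this is the check I plan to run next on the coordinator disks, assuming they must support SCSI-3 persistent reservations for fencing (vxfentsthdw is the standard test utility; treat the exact options, -r for the non-destructive mode and -g for the coordinator disk group named in /etc/vxfendg, as assumptions about my version):
# non-destructive SCSI-3 PR test against the coordinator disk group
/opt/VRTSvcs/vxfen/bin/vxfentsthdw -r -g `cat /etc/vxfendg`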
Quality of Service - Maxiops SLA
Get More Out Of SmartIO
What are your needs for persistent storage with Docker?
In a few days you will hear about a new version of our Docker Plug-in for InfoScale, where we take advantage of new InfoScale 7.1 capabilities to provide quality of service and avoid the noisy-neighbour problem. That means you will no longer have to worry about uncontrolled applications that suddenly start affecting the performance of others. With 7.1 and the new integration, when Docker creates a volume we can limit the maximum number of IOs per second it will serve. It is as easy as running:
docker volume create -d veritas --name <volname> -o maxiops=10000
Now we would like to tell you about the other integrations in our pipeline and to hear from you which ones are most needed, so that we can address your needs first. Please click the link to fill out a short survey.
This is a description of the different things we are working on and we need your feedback:
I/O Acceleration
This feature will allow the Docker user to specify which volumes need higher I/O bandwidth and/or lower latency, so that the data is automatically cached on local SSDs in the hosts. When a container reads data, it is cached on the local SSDs, so subsequent reads are served from the SSDs. The only thing the Docker user has to do is specify that the volume needs to be associated with a cache by enabling the iocache flag:
docker volume create -d veritas --name <volname> -o iocache=on
Snapshots for local clones
Allow Docker users to take snapshots of their persistent storage and make them available on any other host. This lets the Docker user work with different copies of the same data within the same cluster, bringing up containers that use those copies. We just need to specify the name of the new volume and the name of the existing volume:
docker volume create -d veritas --name <volname> -o sourcevol=<volname>
Snapshots for backup/restore
Allow the Docker user to take snapshots that can be used for backup and restore purposes. To take a snapshot, the user only has to run this command:
docker volume snapshot -d veritas <volname>
To restore from one specific snapshot copy:
docker volume restore -d veritas <volname> <snap-number>
And to list the snapshots that are available:
docker volume snaplist -d veritas
Remote-clones
Take the persistent data copy of a container and make it available in any other cluster across any distance. This allows Docker users to work with different copies of the same data across different clusters.
On the local cluster we make the volume available:
docker volume export -d veritas <volname>
On the remote cluster we just use that copy:
docker volume create -d veritas --name <volname> -o remote_sourcevol=<volname>
Policy Management
Ability to create policies like gold, bronze, etc, so the Docker user can just point to them when creating containers:
docker volume create -d veritas --name <volname> -o policy=<class>
Integration with Kubernetes
Allow Kubernetes to run on top of an InfoScale Cluster
Integration with Docker SWARM
Allow Docker SWARM to run on top of an InfoScale Cluster
Graphical User Interface
Understand in a graphical view how the cluster is performing, how each container is using storage, where the storage is located and how IOs are balanced across the infrastructure
Encryption
Being able to encrypt a volume when created from the Docker CLI.
docker volume create -d veritas --name <volname> -o encrypt=on
Please fill out our survey and give us your feedback:
Query regarding stopping of vcs
Hi Team,
Could you please advise on this?
To stop VCS while leaving the applications running, we use:
hastop -all -force
and then start VCS with hastart.
But under which conditions do we have to use the command "hastop -all -force", i.e. when is it required to keep the applications running while VCS is stopped and started?
And when should "hastop -local" or "hastop -all" be used to stop both VCS and the applications?
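My current understanding of the variants, written out for comparison (please correct me if any of this is wrong):
hastop -all          # stop HAD on all nodes and take all service groups (applications) offline
hastop -all -force   # stop HAD on all nodes but leave the applications running
hastop -local        # stop HAD on the local node only; its service groups go offline or fail over
hastart              # restart HAD; after a -force stop it probes and reattaches the running applications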
Please explain.
Thanks
Allaboutunix
vxconfigd core dumps at vxdisk scandisks after zpool removed from ldom
Hi
I'm testing InfoScale 7.0 on Solaris with LDoms. Creating a ZPOOL in the LDom works.
However, it seems something is not working properly. On the LDom console I see:
May 23 16:19:45 g0102 vxdmp: [ID 557473 kern.warning] WARNING: VxVM vxdmp V-5-3-2065 dmp_devno_to_devidstr ldi_get_devid failed for devno 0x11500000000
May 23 16:19:45 g0102 vxdmp: [ID 423856 kern.warning] WARNING: VxVM vxdmp V-5-0-2046 : Failed to get devid for device 0x20928e88
After I destroy the zpool, I would like to remove the disk from the LDom.
To do that, I disable the path and remove the disk:
/usr/sbin/vxdmpadm -f disable path=c1d1s2
/usr/sbin/vxdisk rm c1d1s2
After this I'm able to remove the Disk from the LDom using ldm remove-vdisk.
However, the DMP configuration is not cleaned up:
# /usr/sbin/vxdmpadm getsubpaths ctlr=c1
NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS
================================================================================
NONAME DISABLED(M) - NONAME OTHER_DISKS other_disks STANDBY
c1d0s2 ENABLED(A) - c1d0s2 OTHER_DISKS other_disks -
#
If I run vxdisk scandisks at this stage, the vxdisk command hangs and vxconfigd dumps core:
# file core
core: ELF 32-bit MSB core file SPARC Version 1, from 'vxconfigd'
# pstack core
core 'core' of 378: vxconfigd -x syslog -m boot
------------ lwp# 1 / thread# 1 ---------------
001dc018 ddl_get_disk_given_path (0, 0, 0, 0, 66e140, 0)
001d4230 ddl_reconfigure_all (49c00, 0, 400790, 3b68e8, 404424, 404420) + 690
001b0bfc ddl_find_devices_in_system (492e4, 3b68e8, 42fbec, 4007b4, 4db34, 0) + 67c
0013ac90 find_devices_in_system (2, 3db000, 3c00, 50000, 0, 3d9400) + 38
000ae630 ddl_scan_devices (3fc688, 654210, 0, 0, 0, 3fc400) + 128
000ae4f4 req_scan_disks (660d68, 44fde8, 0, 654210, ffffffec, 3fc400) + 18
00167958 request_loop (1, 44fde8, 3eb2e8, 1800, 19bc, 1940) + bfc
0012e1e8 main (3d8000, ffbffcd4, ffffffff, 42b610, 0, 33bb7c) + f2c
00059028 _start (0, 0, 0, 0, 0, 0) + 108
Thanks,
Marcel
Erasure coding for data storage - Primer
Can we use "IP Options" attribute of IP for src address selection
Environment
SFHA = 6.1.1
OS = RHEL 5.9
Virtual IP = 192.168.0.1
Query
Can we use "IP Options" attribute of IP resource for src IP address selection without mentioning default gateway in "IP Options". Example IP ROUTE COMMAND below
ip route replace default via 192.168.0.254 src 192.168.0.1
With this, I want to achieve that outgoing traffic uses the virtual IP as its source IP instead of the physical IP, which is x.y.z.a.
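To frame the question, this is roughly how I would expect to set it (the resource name ip_res and the option string are assumptions on my part, not something I have verified):
# show the current value of the attribute on the IP resource
hares -display ip_res -attribute IPOptions
# set the options to be applied when the virtual IP is brought online
hares -modify ip_res IPOptions "<options>"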
Need Help Regarding Microsoft Dynamics AX CRM Apps
My company is examining the prospect of integrating Microsoft Dynamics AX with the e-commerce platform we use. I would therefore like to know of any mobile solution, such as an iPhone app, through which all CRM data can be integrated without hassle. One of the places I recently visited for such an app is http://dynamics.folio3.com/dynamics-ax-sales-marketing-crm-app/
Do let me know if there is any solution that can be offered for this problem.
cfsmount1 &cfsmount2 resource could not offline
I have run into a problem with VCS.
Environment:
HW T5220 Server *2 + ax4-5;
SW EMM8 ICP1505
Problem description:
When executing the "init 6" or "hastop -all" command on the cluster, the cfsmount1 and cfsmount2 resources cannot be taken offline normally.
I checked the hardware state (EMC connectivity, disks, system, iostat -En) and the output of vxdisk, vxprint, vxdmpadm, fuser, mount -v, etc.
I tried to umount /var/opt/mediation/MMStorage manually; it did not succeed and it looks like the process hung.
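What I intend to check next, as a rough sketch (whether cfsumount behaves this way in my version is an assumption):
# list the processes holding the mount point (Solaris)
fuser -c /var/opt/mediation/MMStorage
# attempt the CFS-aware unmount on this node
cfsumount /var/opt/mediation/MMStorage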
Please see the check list in the attached files check_point.log, engine_A.log and main.cf.
Could you give me some advice about how to fix the problem?
cfsmount1 &cfsmount2 resource could not offline
I met a problem about vcs.
Environment:
HW T5220 Server *2 + ax4-5;
Problem description:
When executing “init 6” or “hastop –all” command in cluster system, resource cfsmount1&cfsmount2 could not been offline normally;
Checked with the HW state(EMC connective state ,disk, system, iostat –En),the output of “vxdisk , vxprint, vxdmpadm, fuser, mount –v etc.”
I tried to umount /var/opt/mediation/MMStorage manually, it did not succeed and it look like the process has hung up;
Please see check list in attach file check_point.log , engine_A.log and main.cf .
Could you give me some advice about how to fix the problem?
Veritas InfoScale Enterprise 7.1: Managing application I/O workloads using maximum IOPS settings
When multiple applications use a common storage subsystem, it is important to balance application I/O requests in a way that allows multiple applications to co-exist in a shared environment. You can address this need by setting a maximum threshold on the I/O operations per second (IOPS) for the volumes of an application. The volumes of an application are grouped to form an application volume group.
The maximum IOPS limit determines the maximum number of I/Os processed per second collectively by all the volumes in an application volume group. When an I/O request comes in from an application, it is serviced by the volumes in the group until the application volume group reaches the IOPS limit. When the group exceeds this limit for a specified time interval, further I/O requests on the group are queued. The queued I/Os are taken up on priority in the next time interval along with new I/O requests from the application.
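As an illustration, the InfoScale Docker plug-in described above exposes the same capability at volume-creation time; the 10000 value is only an example:
docker volume create -d veritas --name <volname> -o maxiops=10000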
About application volume groups
An application volume group is a logical grouping of volumes associated with an application. The group may contain one or more volumes. All the volumes in the application volume group must be selected from the same disk group. The volumes may belong to a private or shared disk group. Set the maximum IOPS threshold on the application volume group to balance multiple application I/O workloads. The IOPS value is set as a combined threshold for all the volumes in the application volume group.
Some of the configuration and administrative tasks to manage application I/O workloads are as follows:
- Creating application volume groups
- Setting the maximum IOPS threshold on application volume groups
- Removing the maximum IOPS setting from application volume groups
- Adding volumes to an application volume group
- Removing volumes from an application volume group
- Viewing the IOPS statistics for application volume groups
Additional helpful information about this feature can be found at:
Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.