1. There are many methods:
i) Use the existing RAID controller's interface to remove the drives from an array configuration. Most controllers will warn you that doing so risks losing the contents of the array (because the controller will zero/nuke the metadata) -- which is exactly what you want.
ii) Since these are SSDs, the next easiest way is to issue a Secure Erase (see Google).
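As a sketch of what "Secure Erase" looks like in practice: on Linux (e.g. from a live USB) it is typically issued with hdparm. The device name /dev/sdX and the throwaway password "p" below are placeholders, and the script only prints the commands as a dry run rather than executing anything destructive:

```shell
#!/bin/sh
# Illustrative DRY RUN of an ATA Secure Erase on Linux via hdparm.
# /dev/sdX and the temporary password "p" are placeholders; the script
# only PRINTS the commands so nothing destructive can happen here.
DEV=/dev/sdX

# 1) Check the drive's security feature set (must be supported, "not frozen").
CHECK="hdparm -I $DEV"

# 2) Set a temporary user password (security-erase requires one to be set).
SETPASS="hdparm --user-master u --security-set-pass p $DEV"

# 3) Issue the Secure Erase; the SSD then resets all of its cells internally.
ERASE="hdparm --user-master u --security-erase p $DEV"

printf '%s\n%s\n%s\n' "$CHECK" "$SETPASS" "$ERASE"
```

An erase issued this way wipes the controller metadata along with everything else, and on an SSD it also restores the drive to factory performance.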
iii) Use a tool like dd for Win32 with proper oseek and count=1 arguments to match the start and end of the drive, along with bs=1m. You do not want to zero the entire drive (the equivalent of a "full format", i.e. writing zeros to every LBA) -- doing that will greatly hurt the performance of the SSD. You only want to zero what needs to be zeroed.
2. If you do not remove the metadata properly from the drives, you risk issues if the drives are ever put into a system in the future which has RAID support (whether or not it's the same controller). Many RAID controllers store their metadata at the end of the drive (not the start -- which is where the GPT and/or MBR would generally go, hence the assumption that "installing an OS would be enough") and then use the ATA command SET MAX ADDRESS EXT to decrease the advertised capacity of the underlying drive (so that the last 1MByte of the drive can never be accessed while RAID is in use -- otherwise you'd risk touching/overwriting the metadata itself).
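To see why wiping only the start of a drive is not enough, the sketch below plants a made-up metadata signature (the string "FAKERAID", purely illustrative) near the end of an image file, then rewrites the first 1 MiB the way a fresh OS install effectively would -- and the signature survives:

```shell
#!/bin/sh
# Sketch: metadata at the END of the drive survives a wipe of the start.
# "FAKERAID" is a made-up signature and IMG2 is a throwaway 8 MiB image.
IMG2=$(mktemp)
dd if=/dev/zero of="$IMG2" bs=1M count=8 2>/dev/null

# Plant a fake metadata block in the final 1 MiB, as a RAID controller would.
printf 'FAKERAID' | dd of="$IMG2" bs=1M seek=7 conv=notrunc 2>/dev/null

# "Install an OS": rewrite the first 1 MiB (partition table, boot loader, ...).
dd if=/dev/urandom of="$IMG2" bs=1M count=1 conv=notrunc 2>/dev/null
```

After the "install", the fake signature is still sitting in the last MiB -- which is exactly what a RAID BIOS would find and try to act on the next time the drive appears in a RAID-capable system.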
If the RAID methodology was Linux md (i.e. Linux's software RAID), mdadm --zero-superblock will erase the metadata from a drive. Reference: en.wikipedia.org/wiki/Md···an_array

Which solution or method you use -- or whether you do any of this at all -- is up to you to decide; you can choose to ignore the advice given or act on it. You asked if there is anything you should do, and the answer you'll get from me is "yes, you need to nuke the RAID metadata".